Generative Micro Apps Amplify the Human Workforce

Half of office workers in Fortune 100 companies will be supported by AI in one form or another by 2026, says Gartner analyst Nader Henein.

Generative micro-apps are an emerging technology that can enable organizations to leverage generative AI while minimizing risk to the business. The apps act as proxies between users and LLMs such as ChatGPT or Bard. How can enterprises use micro-apps to empower knowledge workers with generative AI and increase employee productivity?

Nader Henein: Gartner predicts that by 2026, 50 percent of office workers in Fortune 100 companies will be supported by AI in one form or another, either to increase productivity or to improve the average quality of work. Such general-purpose micro-apps will become commonplace in the applications used every day in the workplace: word processors, email and conferencing tools.

Take, for example, an LLM augmented with the organization's own research database. When an author writes a new research paper, a micro-app embedded in the word processor would read each paragraph and ask the LLM for examples of supporting research and data, as well as examples of contrarian research, using its predefined prompt library. Responses would be checked by the micro-app and then provided in the form of suggestions or comments in the word processor.
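
To make this concrete, here is a minimal sketch of such a micro-app in Python. It assumes a hypothetical query_llm() call to the database-augmented enterprise model and an illustrative two-entry prompt library; none of the names correspond to a real product API.

```python
import json

# Predefined prompt library: the micro-app only ever sends these templates,
# with the author's paragraph substituted in as data.
PROMPT_LIBRARY = {
    "supporting": (
        "List up to three papers from the research database that support the "
        "paragraph below. Respond only as a JSON list of objects with "
        "'title' and 'summary' fields.\n\nParagraph: {paragraph}"
    ),
    "contrarian": (
        "List up to three papers from the research database that contradict "
        "the paragraph below. Respond only as a JSON list of objects with "
        "'title' and 'summary' fields.\n\nParagraph: {paragraph}"
    ),
}


def query_llm(prompt: str) -> str:
    """Placeholder for the call to the enterprise, database-augmented LLM."""
    raise NotImplementedError


def suggestions_for(paragraph: str) -> dict:
    """Run every predefined prompt against one paragraph and keep only the
    responses that parse into the expected structure."""
    results = {}
    for name, template in PROMPT_LIBRARY.items():
        raw = query_llm(template.format(paragraph=paragraph))
        try:
            items = json.loads(raw)
        except (json.JSONDecodeError, TypeError):
            results[name] = []          # malformed answer: suggest nothing
            continue
        if not isinstance(items, list):
            results[name] = []
            continue
        results[name] = [
            item for item in items
            if isinstance(item, dict) and {"title", "summary"} <= item.keys()
        ]
    return results


def annotate_document(paragraphs: list[str]) -> list[dict]:
    """What the word-processor extension would attach as comments."""
    return [{"paragraph": p, "suggestions": suggestions_for(p)}
            for p in paragraphs]
```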

This tool would extend the author’s capabilities beyond what is humanly possible. No individual could know all the published research in the database, but an LLM augmented with enterprise data can provide that capability.

What exactly is the role of such generative micro-apps?

Instead of the user interacting directly with an LLM, a micro-app carries a pre-programmed set of instructions that performs a specific set of tasks on the user's behalf. There is no conversational or chat interface. Predefined prompts query the model and receive responses in a predefined format, which makes it easier for the logic within the micro-app to validate each response before returning it to the user. Generative micro-apps can be standalone, although in most cases they are embedded as extensions in the productivity platforms knowledge workers already use.
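
As a rough sketch of that proxy pattern, assuming a generic send_to_llm transport function and made-up task names and response schemas rather than any specific vendor API:

```python
import json


class GenerativeMicroApp:
    """The user picks a task and supplies text; there is no chat interface
    and no way to send free-form instructions to the model."""

    # Task name -> (prompt template, key the response must contain)
    TASKS = {
        "summarize": (
            "Summarize the text below in one sentence. "
            'Respond as JSON: {{"summary": "..."}}\n\n{text}',
            "summary",
        ),
        "extract_dates": (
            "List every date mentioned in the text below. "
            'Respond as JSON: {{"dates": ["..."]}}\n\n{text}',
            "dates",
        ),
    }

    def __init__(self, send_to_llm):
        self._send = send_to_llm    # injected transport to the enterprise LLM

    def run(self, task: str, text: str) -> dict:
        if task not in self.TASKS:
            raise ValueError(f"Unsupported task: {task}")
        template, required_key = self.TASKS[task]
        raw = self._send(template.format(text=text))
        parsed = json.loads(raw)            # predefined format, or an error
        if required_key not in parsed:      # validate before returning
            raise ValueError("Model response did not match the expected format")
        return parsed
```

The design choice the interview highlights is visible here: the user's text only ever enters the model as data inside a fixed template, so there is no conversational channel through which the model could be steered off-task.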

To what extent do generative micro-apps reduce the main risks of LLMs?

There are three main risks unique to LLMs: access control, accuracy, and valuation. Micro-apps address each of these risks:

Access control: organizations have come to rely on deterministic access control, where an access rule is created and applied 100 percent of the time. If a request does not satisfy the rule, the system simply denies access. However, when an LLM is augmented with different types of enterprise data, there is no guarantee that those access rules will be followed. Generative micro-apps act as proxies for the enterprise LLM, meaning the user never interacts directly with the model via chat. As such, they cannot be coerced into disclosing restricted data (a sketch of how access rules can stay in front of the model follows after this list).

Accuracy: "hallucinations" is the term used to describe how models occasionally give fictitious yet confident and convincing answers. Through rigorous prompt engineering, the preset prompts embedded in micro-apps can limit hallucinations. In addition, the micro-app can require that answers arrive in a format the app can validate before passing them to the user (see the validation sketch after this list).

Valuation: companies may not be willing to pay the same amount for products and services produced by an LLM as they would for the work of trained and experienced professionals. Purpose-built micro-apps are being developed to complement knowledge workers. This improves the average quality of work and increases productivity, which in turn helps alleviate the skills shortage. Because the work is still performed by professionals, the business model is protected from valuation risk.
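
For the access-control risk, one way a micro-app can keep deterministic access rules in front of the model is to fetch enterprise content through the existing, rule-enforcing repository on the user's behalf, so restricted material never enters the prompt. This is only a sketch; repository.search() and query_llm() are hypothetical placeholders, not a real API.

```python
def draft_answer(user_id: str, question: str, repository, query_llm) -> str:
    """Sketch: the repository enforces the usual all-or-nothing access rules;
    the model only ever sees documents this user could already read."""
    allowed_docs = repository.search(question, on_behalf_of=user_id)

    context = "\n\n".join(doc.text for doc in allowed_docs)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # The user never talks to the model directly, so there is no chat channel
    # through which it could be coaxed into revealing restricted data.
    return query_llm(prompt)
```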
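
For the accuracy risk, the validation step might look like the following sketch: the micro-app demands a machine-checkable format and cross-checks every cited title against the enterprise research database before anything reaches the user. The known_titles set and the response shape are assumptions for illustration.

```python
import json


def validate_suggestions(raw: str, known_titles: set[str]) -> list[dict]:
    """Keep only suggestions that parse cleanly AND cite entries that really
    exist; a hallucinated or free-form answer yields an empty list instead."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return []                       # not in the agreed format: discard
    if not isinstance(items, list):
        return []
    return [
        item for item in items
        if isinstance(item, dict)
        and item.get("title") in known_titles    # the cited title must exist
    ]
```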