
Will AI Assistants Replace Applications?

Apps May Soon Become Redundant

The rapid evolution of generative AI has provoked another round of heated discussions about its effect on the future of technology—as well as our daily lives. I would like to jump in and speculate on how this may change the future of digital end-user products. 

Here is my main hypothesis: if AI assistants like ChatGPT continue to evolve at their current pace, we will witness the end of the era of apps as we know them.

In the future, I probably wouldn’t want to install a separate app on my smartphone just to order food when I can ask an assistant to order “a sour snack to go with beer.” Why would I need a separate app to search for airline tickets and hotels when I can ask the assistant to plan my trip? It is simply more convenient to link service providers to my assistant and interact with them in the most intuitive way there is: human dialogue.

It seems like digital products of the future will transform into something like AI adapters for the real world: they will provide AI assistants with access to a variety of offline services, such as ordering goods and services, booking appointments at the gym or barbershop, and accessing news and entertainment content.

Why do I see it this way? In my opinion, assistants like ChatGPT, Siri, or Alexa have strong potential to become a universal interface through which we will comfortably use a large number of services. For now, they communicate mainly through voice and text, but I am almost certain that in the future they will be able to present information in richer, better-structured ways: product cards, online maps with pins, calendars, interactive widgets, visual instructions, and so on. In doing so, AI assistants would offer an unmatched way of interacting with services.
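To make this more concrete, here is a minimal sketch, in Python, of the kind of structured payload a provider could hand to an assistant instead of a finished screen. Every name and field here is hypothetical; the point is only that the provider describes content while the assistant decides how to render it.

```python
# Hypothetical example: the provider describes content only; the assistant
# decides whether to show it as a card, drop it as a map pin, or read it aloud.
from dataclasses import dataclass, field


@dataclass
class ProductCard:
    title: str
    price: str
    rating: float
    image_url: str
    actions: list[str] = field(default_factory=lambda: ["order", "save for later"])


def snack_suggestion() -> ProductCard:
    # No buttons, fonts, or screens here: just data the assistant can present.
    return ProductCard(
        title="Salted lime crisps",
        price="$3.20",
        rating=4.6,
        image_url="https://example.com/crisps.jpg",
    )
```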

How should service providers react to this? Simply enough: by integrating their solutions into these growing ecosystems. The iOS SDK already lets developers expose custom commands to Siri, and Amazon’s Alexa Skills Kit lets third-party developers add their own voice interactions through its APIs. Many products can already integrate with voice assistants to perform tasks such as playing music, managing smart homes, and checking the news or weather. However, these use cases are pretty basic; they only involve a direct request and response. Further development of these tools could lead to a more advanced scheme:

Abstract request → Analysis and supplier selection → Personalised response.
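As a rough illustration of that scheme, here is a minimal sketch of the assistant side in Python. It assumes hypothetical provider integrations; none of the provider names, functions, or ranking rules below come from any real API.

```python
# Hypothetical sketch of the "abstract request -> supplier selection ->
# personalised response" flow. All providers, offers, and scoring are invented.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Offer:
    provider: str
    item: str
    price: float
    eta_minutes: int


# Registered "adapters to the real world": each provider exposes a search function.
PROVIDERS: dict[str, Callable[[str], list[Offer]]] = {
    "quick_snacks": lambda query: [Offer("quick_snacks", "salted crisps", 3.20, 25)],
    "beer_buddy": lambda query: [Offer("beer_buddy", "pickled gherkins", 4.10, 40)],
}


def handle_request(abstract_request: str, user_prefs: dict) -> str:
    # 1. Analysis: collect candidate offers from every integrated provider.
    offers = [offer for search in PROVIDERS.values() for offer in search(abstract_request)]

    # 2. Supplier selection: filter and rank by the user's preferences.
    offers = [o for o in offers if o.eta_minutes <= user_prefs.get("max_wait", 60)]
    best = min(offers, key=lambda o: o.price)

    # 3. Personalised response: phrased as dialogue, not as a product listing.
    return (f"I can get {best.item} from {best.provider} for ${best.price:.2f} "
            f"in about {best.eta_minutes} minutes. Should I order it?")


print(handle_request("a sour snack to go with beer", {"max_wait": 30}))
```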

In my opinion, one of the potential advantages of this development is that it lets service creators focus on what matters most, namely the quality of the product and its monetization model, instead of spending effort on peripheral features like recommendation systems or product display. All of these services and products would be available through a single interface.

The essence of these assistants may seem similar to the trendy super-apps, only more modular, but the approach is still a bit different. A super-app offers all its features to the user from the start, while an AI assistant offers nothing until the user asks for it.

This leads to another problem: how do you advertise your services if the user spends more time interacting with the assistant interface than with advertising platforms? I assume this will be governed by the same market laws we already know: demand creates supply. It is quite possible to imagine that every morning your assistant offers you a paid digest of new offerings based on suppliers’ advertisements.

The range of products that will have to adapt to the new format is quite wide, but some are unlikely to be affected, at least not in the way I described above. These include applications where user interaction is at the center of the experience, such as video games and other types of interactive entertainment, and more broadly any product built around an active input/output loop.

We can also imagine how products that exist only online will adapt to the new norm. Take social networks as an example. I think it would be convenient to use them not only by scrolling and double-tapping but also by asking your AI assistant things like “How is my friend Stephanie doing?” and getting a response along the lines of: “According to the latest posts from Stephanie, she is feeling down because AI is taking away jobs. Would you like to read more?”
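Below is a provider-side sketch of what such an integration might look like. The function names, the sample posts, and the canned summary are entirely made up; a real implementation would query the social network with the user’s consent and summarize the posts with an LLM rather than hard-coding the answer.

```python
# Hypothetical adapter a social network could expose to an assistant.
# Everything here is a placeholder, not a real API.
def recent_posts(friend_name: str) -> list[str]:
    # A real adapter would fetch this over the network with proper authorization.
    return [
        "Third job rejection this week...",
        "Apparently an AI writes my column now.",
    ]


def friend_status(friend_name: str) -> str:
    posts = recent_posts(friend_name)
    # Stand-in for an LLM summarization call over the fetched posts.
    summary = "she is feeling down because AI is taking away jobs"
    return (f"According to the latest posts from {friend_name}, {summary}. "
            "Would you like to read more?")


print(friend_status("Stephanie"))
```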

In the corporate environment, large language models are already showing great potential for compiling and interpreting reports, analyzing data, and assisting with routine tasks. In fact, people who actively use modern generative AI today are more likely to use it to improve their work efficiency than to simplify their daily routine outside of work.

What’s Happening Now?

An example of a product that partly illustrates my vision is Humane’s recently released AI Pin, a wearable smart pin with built-in GPT access. It has an unconventional but curious interaction interface: a combination of a laser projector and a voice assistant, with no support for third-party apps. The device integrates with the Tidal music streaming service, which is essentially an example of the “adapter to the real world” concept I mentioned above.

I agree that, for now, this product seems strange and hard to use, but in my opinion it is a good representation of my idea: the personal device market as we know it may soon come to an end thanks to the development of AI assistants. Interaction interfaces will become more and more intuitive, relying on voice, gestures, or images. The service interfaces we know today may become less relevant, and UX approaches may shift their focus from texts and buttons to data markup.

I have already given some examples of how this idea is being developed these days in the form of voice assistants. They have been on the market for a long time, but soon, they may become much more useful thanks to the integration of generative AI.

What Stands in the Way of This Scenario?

Voice assistants already allow many people to handle their routine tasks a little faster, yet truly smart assistants are still at a very early stage of development.

One of the main unsolved problems is data security. Many large companies now forbid their employees from using ChatGPT at work because of the risk of leaks. Even though OpenAI has launched its Enterprise offering, under which it commits not to use customer data for model training, this remains an acute issue for regular users. You don’t really want the AI to accidentally tell everyone about your personal preferences and habits or disclose confidential information.

The second problem is performance. Microsoft is already saying that it has to rethink its server infrastructure because of OpenAI’s workloads. Training and running large language models, especially at the GPT-4 level, requires an enormous amount of computational resources, which may simply not be enough for a qualitative leap in speed and service availability. This is also why offline assistants, which could actually solve the data security problem, are still rather limited in their capabilities.

The third problem is that large language models (LLMs) are developing too quickly. Users simply do not have time to get used to them and learn all their capabilities. Many people still see them more as a toy. Even if the first two problems were solved tomorrow, it might take quite some time for the market to adapt to this paradigm shift and understand the real potential of new generations of AI.

And while all these problems are being worked on, I would recommend keeping a close eye on current trends. OpenAI, Anthropic, Google, and other companies seem to be leading the way in innovation, and we cannot predict what it will all look like a year from now. Unfortunately, the industry giants have a monopoly on large-scale research in this area due to the huge infrastructure available to them.

However, service owners and providers should probably already start thinking about how their products may transform in the future.

What is your take on the development of AI assistants? How might they affect the lives of end users and the digital solutions market as a whole?

