Imagining the Future of Generative AI for UX

Predicting the future of Gen AI for UX research and design

Jun Li
UX Planet

--

We talk a lot about the pros and cons of current AI tools and how to apply them in our daily work as UX designers. Yes, they have room for improvement, and yes, they offer moments of amazing efficiency that make people’s eyes light up with delight! But what about the future of Gen AI tools? In what direction will they evolve?

Here are my predictions:

  1. A singular tool that incorporates UX research and design
  2. More diverse and inclusive data output
  3. Coverage of more screen types and industries

At the end, I talk about the role UXers will play in the game.

Let’s start!

A singular tool that incorporates UX research and design

Today we have different Gen AI tools that target specific areas. For UX research, we can use conversational, text-based AI tools like ChatGPT for brainstorming and data analysis, or Whimsical, a Figma plugin, to generate user flows. For UX design, we have various AI tools that target color, icons, fonts, 3D models, images, wireframes, and prototypes.

The problem is that these Gen AI tools do not communicate with one another. Thus, the outputs they create are not ideal.

Good UX design is based on UX research; designs are solutions to users’ problems. The same applies to AI tools: how can they create something wonderful when they do not even know the questions? This is why we rarely get satisfactory results with current tools. We grab a piece here and there that fits our needs and assemble a result ourselves, because we know the problems and the background, and we understand which solutions are feasible.

Currently, UX designers carry information between different Gen AI tools and synthesize everything themselves. More advanced Gen AI tools should shift that burden from UX designers onto themselves through internal communication.

Gen AI tools should already understand the background and problems and generate results based on that.

Is it an app for a movie theater or a toy shop for toddlers? No need to ask; this information is already there. Which screens do you want to generate? How should users move from this page to the next? Just click on a step in the user flow, and you get all the wireframes you need. These wireframes will share consistent interaction patterns and follow a single logic.
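To make the idea concrete, here is a minimal sketch of how such a unified tool might model things internally. Everything below is hypothetical: the types, names, and fields are my own invention for illustration, not any real product’s API.

```typescript
// Hypothetical data model: every flow step carries the shared project
// context, so the wireframe generator never has to ask what the app is about.
interface ProjectContext {
  domain: string;             // e.g. "toy shop for toddlers"
  audience: string;           // e.g. "parents of children aged 1-4"
  researchFindings: string[]; // insights handed over from the research side
}

interface FlowStep {
  id: string;
  name: string;   // e.g. "Checkout"
  next: string[]; // ids of the steps this screen can jump to
}

// Clicking a step would hand both the step and the full context to the
// generator, so every screen follows the same interaction logic.
function generateWireframe(step: FlowStep, context: ProjectContext): string {
  return `Wireframe for "${step.name}" in a ${context.domain} app, ` +
    `informed by: ${context.researchFindings.join("; ")}, ` +
    `linking to: ${step.next.join(", ")}`;
}

const context: ProjectContext = {
  domain: "toy shop for toddlers",
  audience: "parents of children aged 1-4",
  researchFindings: ["parents want one-tap reordering"],
};
console.log(generateWireframe({ id: "s2", name: "Checkout", next: ["s3"] }, context));
```

The point is simply that research output and design input would live in one shared structure instead of being copied between tools by hand.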

Besides, UX designers will get more accurate output that fits users’ needs.

Gen AI tools will have all the background information, so UX designers will no longer need to spell out what to generate across different prompts. If it is an app that sells toys for toddlers, all visuals will revolve around this theme.

UX designers will provide feedback on the outputs and refine the style. If one style is particularly good, other designs can be based on it. Thus, all design elements can inform each other.

In this way, the entire visual language will be aligned, from details such as stroke weight, corner radius, and colors to icons, photos, and image styles. UX designers will not have to scratch their heads over the right prompts for each platform. They will have a single platform, a one-stop shop for everything: request a single style, and it applies to the entire design.
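This is essentially the design-token idea pushed further. As a hedged sketch (the token names and structure below are invented for illustration, not any tool’s actual format), a single style definition could drive every generated element:

```typescript
// Hypothetical single source of truth for visual style. Changing one value
// here would ripple through every generated screen, icon, and illustration.
const styleTokens = {
  strokeWeight: 2,      // px, shared by icons and dividers
  cornerRadius: 12,     // px, shared by cards, buttons, and inputs
  colors: {
    primary: "#FF6B35", // a playful orange for the toddler toy shop
    surface: "#FFF8F0",
  },
  imagery: "soft, rounded, pastel illustration style",
} as const;

// Any generator (buttons, icons, photos) reads from the same tokens,
// so one style request applies to the entire design.
function describeButton(label: string): string {
  return `${label}: ${styleTokens.colors.primary} fill, ` +
    `${styleTokens.cornerRadius}px corners, ${styleTokens.strokeWeight}px stroke`;
}

console.log(describeButton("Add to cart"));
```

Because there is exactly one place to change the style, nothing generated can drift out of alignment.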

In conclusion, since UX design is informed by research, Gen AI tools for UX research and design need to communicate with each other. Right now they are separate, which creates a gap: each tool focuses on its own narrow area, so UX designers have to communicate with all of them, over and over, and hunt for usable pieces.

In the future, Gen AI tools will merge into a single, complete tool that incorporates user research and design, so that the designs generated are based on research results. At the very least, we will have a platform that allows Gen AI tools to fetch data from one another.
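Even that weaker version implies some shared interface. Purely as a speculative sketch (no such standard exists today, and every name below is hypothetical), an interoperability layer might look like this:

```typescript
// Hypothetical common interface that any Gen AI tool could implement, so a
// design tool can pull findings straight out of a research tool.
interface GenAITool {
  name: string;
  exportData(): Promise<Record<string, unknown>>;            // share what this tool knows
  importData(data: Record<string, unknown>): Promise<void>;  // learn from other tools
}

// A tiny hub that pipes one tool's output into another's input, so the
// designer no longer has to ferry information between tools by hand.
async function connect(from: GenAITool, to: GenAITool): Promise<void> {
  const data = await from.exportData();
  await to.importData(data);
  console.log(`Synced ${from.name} -> ${to.name}`);
}
```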

More diverse and inclusive data output

When Gen AI tools first came out, the humans they generated were mostly white men and women. People were awed and overwhelmed by the tools’ powerful computation and output. As time went by, more and more people felt that representing only one ethnic group could no longer satisfy them. Thus, some Gen AI tools now intentionally include more diverse data and generate images with more ethnicities.

The key to change is that people notice, discuss, and reflect on the ethical issues behind Gen AI’s outputs.

With more attention, people understand the areas for improvement and become more careful in deciding what type of data to feed the learning models.

Now we need more diverse and inclusive data input for better output. However, to find the right data, we need to address topics that challenge social norms and stereotypes.

For example, when asking Adobe Firefly to generate a beautiful person, all the people in the outputs share a light, white skin tone, nearly flawless skin, a full head of hair, and a slim body shape.

Reviewing these outputs, we should ask: What does beauty mean in a human? What gender or race should the person have? Can’t a person with a shaved head also be considered beautiful? What about other body shapes? With such reflections, we will know what type of data to feed the AI.

“Beautiful person” by Adobe Firefly

Another example is creating pictures that involve nudity. Right now, platforms ban nudity outright. But what if the nude images are for artistic or educational purposes?

When I ask Gen AI to draw Venus de Milo, the ancient Greek marble sculpture, or David, the famous Italian Renaissance sculpture, it either dresses up the naked body, as Adobe Firefly does, or declines my request, as DALL-E 3 does. This makes me wonder: is it all right to alter these masterpieces without asking? Why not apply the same kind of content ratings to adult material that we already have for movies and games?

“Venus de Milo, Sculpture by Alexandros of Antioch” by Adobe Firefly

Creating pictures involving violence is also a sensitive topic. Some current Gen AI tools, like Adobe Firefly, refuse to generate violence at all, even for historical events that contain it. Others, like DALL-E 3, sanitize violent historical events, depicting them without injuries or blood.

“The attack on Pearl Harbor; The attack on Pearl Harbor detail; The attack on Pearl Harbor where people die” by DALL-E 3

What if this violence happened in real history, and photos of the events are hard to find because they were destroyed long ago? With documentaries and books, I believe we have all the data needed for accurate outputs. But if we can generate historically violent photos, to what extent should we restore the details and the violence? When the images get that close to the truth, could we even use Gen AI the way we use Wikipedia? And how would we verify authenticity without references, as we can with research papers today?

Plus, to what extent we should expect Gen AI to depict reality is worth discussing. Right now, Gen AI is bad at depicting specific places or real events; it takes prompts literally. For example, I asked Gen AI to create images of the Stanford prison experiment, a psychological experiment conducted in August 1971. Some outputs had nothing to do with the experiment, while others depicted it inaccurately: the participants were college students, but the images showed people of a wide range of ages.

“Stanford prison experiment” by Adobe Firefly and DALL-E 3

So while we need to be cautious with Gen AI results about real events and places, how close to reality do we want the outputs to be? And would that closeness raise privacy issues?

Gen AI tools have limitless potential. I believe that future generations of AI tools will create more diverse and inclusive outputs as more and more people focus on the ethical issues. Change is already showing, gradually, as we consciously feed the models the missing data.

However, more untouched topics need discussion. Solving these ethical questions may require help from lawyers, sociologists, historians, and philosophers. The answers are not easy, but they are essential, because they help us define what data to feed Gen AI. Ethical topics need to be addressed for a better world.

Coverage of more screen types and industries

Current Gen AI tools focus mostly on website, iPad, and mobile screens, and most of the products they target are direct-to-consumer. As the tools mature, they will expand to more devices, such as watches and AR/VR headsets, and to more disciplines, such as automobiles and medical devices.

Consumer products will become homogeneous, and users will grow familiar with the procedures and interactions.

Take an e-commerce shopping site as an example. User flows are becoming alike. That is a good thing: users understand how to make a purchase no matter what they are buying or which site they are buying from, with no learning curve. However, it also means that Gen AI will develop in other areas to cover more platforms, screens, and industries, such as automotive UX and medical devices.

It is worth noting that Gen AI will only cover products that are accessible on the market and direct-to-consumer. Business-to-business products are excluded, since data privacy and protection keep their UX and user flows business secrets, at least from the public who are not their direct customers.

So, where will UXers play in the game?

  • User research

We will always design based on humans and for humans. Thus, user research is irreplaceable. Gen AI can hardly conduct user research for us, because it cannot understand data and people the way a human can.

  • Creative ideation and solutions

Repetitive tasks will be taken over by AI. UX designers will put more time into creative ideation, testing, and innovative solutions to problems.

  • Shift focus to less explored areas

UX designers will move to areas that are either less explored or require specific, professional industry knowledge. These disciplines have a higher barrier to entry and demand strong logic and problem-solving skills. For example, business-to-business products are more complicated: UX designers need to understand the problems and unpack complicated processes into simple, understandable solutions. Less focus will go to mature areas where people have already invested a lot of time, such as e-commerce products like coffee-ordering apps.

Thank you so much for reading my article! Cheers!


A UX designer with a passion for enriching life through gamification and psychology