Machine learning (ML) and artificial intelligence (AI) are big news. They’ve been asked to do everything from driving cars to predicting civil wars, so it’s no surprise to see the technology being applied to graphic design. But exactly how AI and design will work together remains to be seen.
Supporters claim they will augment design and free up creatives to produce even better work. Critics say they’ll leave designers out of jobs and start a race to the bottom. So who’s right, and what do ML and AI have to do with graphic design?
What is the difference between artificial intelligence and machine learning?
When many people think of AI they think about robots. But AI and ML aren’t necessarily about machines that can walk or talk; they’re about machines that can learn and make decisions. Machine learning describes IT systems that process large amounts of data, which they then use to independently recognize patterns and deliver steadily better results. ML applications can detect faces, suggest the best route into the office or flag potential fraud from thousands of everyday online transactions.
Artificial intelligence is a broader category covering systems that can perform tasks that require human-like intelligence. ML is a major part of AI, but AI also includes functions such as reasoning (making logical inferences from data), language processing (so the AI can communicate naturally with humans) and planning (being able to set and accomplish goals).
The machines aren’t just on the rise… they’re getting creative
One of machine learning’s main strengths is its ability to churn through an enormous amount of data and produce sophisticated results. ML and AI play a big role as behind-the-scenes processors, crunching data to give us tailored suggestions for our next possible Netflix binge or comparing online essays to detect plagiarism.
But these tools are increasingly in the front seat. They can guide self-driving cars and develop recipes. AI has made films (including Zone Out, which features a man discussing having sex with a jar of salsa), claims it can write content five times faster than a human and has created a whole art exhibition in Amsterdam. Google ended up firing an engineer who claimed its chat AI, LaMDA, had a soul.
So will AI replace graphic designers? Not quite. Let’s have a look at what design tools can do, what they can’t, and what the implications for the industry are.
How AI design works
Just as AI in other fields has had some teething problems (think racist chatbots, or self-driving cars that find traffic cones deeply confusing), AI in graphic design is in its early stages.
There are plenty of tools that walk you through a series of choices and spit out design options such as sample logos. Other applications can help you choose a color scheme or design a website. These applications tend to have a limited scope, or to generate only a small number of fairly formulaic designs. Some barely use ML, let alone AI. But increasingly sophisticated text-to-image AIs, which create images on the basis of user prompts, are doing far more than that.
A typical path for text-to-image applications is:
- the user supplies the AI with a text prompt
- the AI turns those words into a semantic map: a web of embeddings in which related words are grouped, which the AI works into a rough “napkin sketch” of the image
- detail is added to this sketch in a process called diffusion, with the AI predicting the artwork based on your initial prompt and data the AI has surveyed
- because the AI has an enormous reservoir of data to draw on, it can iterate huge numbers of semi-random takes on the prompt.
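The steps above can be sketched in miniature. The code below is a toy illustration, not a real text-to-image model: it stands in a hash-derived vector for the semantic map and a simple noise-reduction loop for diffusion, and every function name here is a hypothetical placeholder invented for this example.

```python
import hashlib
import random

def embed_prompt(prompt: str, dim: int = 8) -> list[float]:
    """Toy stand-in for the semantic map: derive a fixed vector
    of values in [0, 1] from the prompt text."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255 for b in digest[:dim]]

def generate_image(prompt: str, steps: int = 50, seed: int = 0) -> list[float]:
    """Toy 'diffusion': start from pure random noise and iteratively
    nudge it toward the prompt embedding, mimicking step-by-step denoising."""
    target = embed_prompt(prompt)
    rng = random.Random(seed)
    pixels = [rng.random() for _ in target]   # the initial "napkin" is noise
    for _ in range(steps):
        # each step strips away a little noise, guided by the prompt
        pixels = [p + 0.2 * (t - p) for p, t in zip(pixels, target)]
    return pixels

# Different seeds start from different noise; stopping earlier leaves
# more of that randomness in, giving semi-random takes on one prompt.
a = generate_image("a fox in a business suit", seed=1)
b = generate_image("a fox in a business suit", seed=2, steps=5)
```

Real diffusion models replace the hash with a learned text encoder and the nudging loop with a trained denoising network, but the shape of the process (prompt, embedding, iterative refinement from noise) is the same.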
The drawback? Some easily accessible versions, like DeepAI’s free-to-use text-to-image generator, produce decidedly raw images. Others, such as the more sophisticated Midjourney, remain in beta. And the cutting-edge photorealism of OpenAI’s DALL·E and DALL·E 2 and Google’s Imagen remains tantalizingly out of reach for most designers: DALL·E 2 has a lengthy waiting list, and Imagen remains closed to the public. Stable Diffusion might just be a game changer; it’s open-source, allowing users to build on its code, and unfiltered, meaning no images are off the table.
AI can play a crucial role in graphic design
As we can see, AI and ML currently have two key uses in design:
- as a fairly basic tool for shaping logos and basic print or digital layouts
- as a more creative tool for generating art based on prompts
The second use is arguably more immediately transformative. Applications such as DALL·E 2 can produce thousands of iterations of a single idea, and increasingly produce artwork of real quality. In this way, text-to-art AIs perform the classic role of a machine—producing work at scale in a short timeframe that might be tiresome for human workers to create.
Machine learning for designers also gives human creatives the gift of time, enabling designers to take a break, take on more work, or experiment with different ideas while software does the grunt work. Text-to-art prompts also open up design to people who might have great ideas, but not the sketching expertise to realize them. “DALL·E is a wonderful tool for someone like me who cannot draw,” says artist Benjamin Von Wong. “Rather than needing to sketch out concepts, I can simply generate them through different prompt phrases.”
Today’s AI design tools are ideally suited to visual brainstorming. By pulling together an almost infinite number of possible interpretations, they offer designers numerous chances to find the best look for a given brand. AI and design work together particularly effectively when designers wish to mix different concepts and styles, producing improbable combinations from a simple search bar. Indeed, that aspect of AI design tools may prompt a shift in the way we consume images, with surprising combinations of different styles and objects becoming increasingly common, something we’re already seeing in the metaverse.
As AI grows increasingly good at producing original art, its role will increasingly move beyond producing mock-ups and concept art. Some AI art is simply stunning, and the ease with which it can be produced might one day make image libraries seem like an anachronism.
AI and ML tools offer broader advantages too. By compiling huge amounts of data and spotting patterns, they let researchers find out more about their target market. Their ability to put multiple spins on the same product makes them effective instruments of both localization (producing different branding for different countries) and personalization (creating products that are different for every consumer).
But AI and ML are only part of the picture
While AI looks certain to change aspects of graphic design, it’s important to note that it actually only automates a relatively small part of the process of creating a set of designs. A brief generally begins with dialogue, and the client laying out what they think they want—a kind of “prompt” that may bear little relation to the design they finally decide upon.
The creative process will be shaped by the brand’s competition, its target market and the formats its designs need to be displayed in. It will move through mood and inspiration boards, before visual brainstorming produces a range of sketches from that creative process. Once the best ideas are chosen, they will be reworked repeatedly based on feedback, and concepts and mockups gradually honed to perfection.
As we’ve seen, “producing a range of sketches” is something AI can excel at. But that still leaves numerous design steps that AI can’t yet perform effectively.
AI art has other flaws. Its role in the current enthusiasm for strange juxtapositions may feel like a positive force, but could soon start to feel like a cliché, with a glut of artificial art leading tastemakers back towards the real. AI works are also limited by their input: they can be biased, and because they draw inspiration from existing sources, their ownership is not always clear. That raises concerns both around licensing images across different markets and platforms, and around plagiarism.
None of these drawbacks mean AI and ML can’t become a crucial part of your design process. But they’re not all-encompassing solutions: instead, they’re best thought of as resources that you can point at certain design problems.
AI can transform design—but it won’t replace designers
Artificial intelligence and machine learning will have a key role in graphic design’s future, and the tools already on the table are capable of producing appealing and surprising takes on the same topic. That can be a boon for brands, empower people who have creative impulses but limited sketching skills, and give creatives a chance to step back from the canvas.
But human inspiration and the soft skills of negotiation and client management are going nowhere: AI is good at making an image, but it can’t yet sell a story. As AI tools mature, that may change, but for now a human designer, with their unique viewpoints and expertise, remains essential for anyone hoping to bring a brand’s vision to life.