Artisans bring AI tools to the workbench

One of the most curious objects in last year’s Venice Glass Week was a milky white blown-glass vase. At least, it looked at first glance like a vase, and an unassuming one, but there was something strange about it: instead of opening up at the top, it was sealed over, and instead of being hollow, it was solid and heavy. 

It was, in short, a functionless vessel; or, as the title put it, an “Imposter Vessel”. 

The object was based on images provided by a generative artificial intelligence tool in response to a text prompt. The original AI image was displayed alongside the physical object, along with some of the words used in the prompt: “hand blown glass, vase, white, sand, contemporary, on a plinth at an exhibition”. 

The AI tool (in this case, Dall-E 2) had not yet learnt what designers call the “human context”: in this case, that a vase is not a vase without an opening and an interior space suitable for the display of flowers. 

The woman behind the vase is Rezzan Hasoğlu, a Turkish designer and glassblower based in London. When she submitted the idea to the Venice Glass Week committee, she included only the AI image. Without the written explanation, she says, it would have been hard to say whether the image was of a real glass object.

[Image: an ovoid glass vase with a sealed internal void]
[Image: gloved hands work on the vessel at a glassworks]

Displayed on a small plinth alongside other exhibits, many of which were more aesthetically adventurous, the main reaction that “Imposter Vessel” seemed to evoke among visitors was a shrug of bemusement. With hindsight, though, it may have been a harbinger of things to come. 

The development of AI has been extraordinarily swift. Some designers and craftspeople are dismissive of its ability to replace human creativity; others are anxious about its impact on their livelihoods. Hasoğlu, however, is one of an increasing number who are embracing AI and finding new ways of incorporating it into their work processes.

AI enables human-built machines to “do more, faster and cheaper”, says Christian Laforte, vice-president of research at Stability AI, a company specialising in open-source generative AI. “I like to think of AI assistants as increasingly helpful ‘colleagues’ that are growing rapidly in skill and experience.” 

Some have even been made to look human, such as Ai-Da, who resembles an old-fashioned automaton but with robotic arms and who is touted by its (or her) creators, the gallerist Aidan Meller and his team, as “the world’s first ultra-realistic artist robot”. Ai-Da made a foray into design for the London Design Biennale 2023, using generative AI to produce a series of tableware designs, including a jug, a teapot and cutlery. 

The “world’s first AI designer” is claimed to be Tilly Talbot, an AI model that takes the form of a young woman on a screen — who talks with a British accent. Tilly is the brainchild of Amanda Talbot, the founder of the Sydney-based Studio Snoop. At Miami Art Week in December, Studio Snoop exhibited “House of Tilly”, five physical design prototypes developed by human designers and makers in collaboration with Tilly, who first made her public appearance at Milan Design Week last April.    

For her part, Hasoğlu says she is “not an AI artist”, and points out that the AI tool she used did not understand the practicalities of glassmaking. “I wanted to challenge that gap between the new emerging AI tools and centuries of craft knowledge and material knowledge.”

[Image: a woman examines an interactive screen next to a tapestry of a polar bear among flowers]

Her process involved entering the text prompt quoted above into Dall-E 2 and refining it through different variations; she calculates that it took 101 iterations to reach the final image. She then used software to create a computerised 3D model from the AI image, before travelling to Notarianni Glass in Dorset to make the physical object.

Hasoğlu altered aspects of the design — the AI had suggested an orangey-purple colour scheme, despite the inclusion of “white” in the original prompt. She rejected this: “It looked like a really bad egg.” She did, however, decide to preserve the “smoky” effect of the AI image, which she achieved using the very un-digital process of sandblasting. There were aspects of the final object that no digital tool could replicate. In particular, her work “needs to be seen and touched . . . it’s very difficult to convey this online”.

For Andrea Mancuso, an Italian designer, one of the fascinations of AI-generated imagery is its unexpectedness. This includes its mistakes: the “absence of pragmatism” that comes from operating in a different way from people. Mancuso believes that a grasp of AI is vital for the next generation of designers.

As a teacher at the New Academy of Fine Arts in Milan, he pushes his students to use it. When it comes to visualising ideas, the latest version of Midjourney (version 5.2, released last June) is a “game changer”, he says, especially for students who have a less visual imagination. Writing down precise prompts for ChatGPT can also help them to develop their ideas and “deepen their research”. He emphasises, though, that AI is a tool, not a substitute for human creative choice and direction. The wrong way to treat Midjourney is to use it “like a slot machine, creating picture, picture, picture . . . because that’s just a complete loss of time”.

Recent results of collaboration between humans and AI were on display at Crafting Dimensions: Dreams of Augmented Gems, an exhibition in Amsterdam last November that explored digital interventions in traditional craft. The curator, Natalia Krasnodebska, works as a “technical support for engineers in cryptocurrencies”; in her spare time, she makes jewellery. The idea for the exhibition came partly from arguments she had with artist friends who were worried AI was going to take their jobs.

Krasnodebska contributed a jewellery collection she designed with the aid of Dall-E. “AI can be really helpful in generating ideas,” she says, but her own “input as a curator was vital at every step”. In particular, she says, “AI does not (yet) have human taste. If a designer finds something subjectively good, others may too. AI does not understand this, or what it means to experience things from the individual, embodied point of view.”

Also shown in the exhibition was “Spawns”, a collection of silver spoons made in a collaboration between jeweller Gio Sampietro and future-focused design studio OIO. The designers began the project in early 2021. They devised a tailored process involving a GAN, or “generative adversarial network”, an earlier type of generative model that predates more sophisticated tools such as Midjourney, which they “fine-tuned” by training it on a carefully assembled database of antique spoons. The images produced by the GAN were blurry, but they were enough to serve as a starting point for refinement through other software.

For Sampietro, the attraction of AI is its capacity to produce surprising results. But he finds generating images through simply entering prompts unsatisfactory. It leads to “zombie images . . . without the soul”. True meaning, he argues, can only come through human context, storytelling and “warmth”.

Dinuo Liao is one designer who has been trying to create AI that can take human tastes into account. For his graduate project at Delft University of Technology last year, he asked people to rate images of lamps generated by AI for desirability and visual appeal. He used the most highly rated images to fine-tune the AI model, which was then able to generate images of lamps better liked by participants.

Although Liao’s project was conducted on a small scale, it suggests AI could be trained to adapt its creations to suit human preferences — and help designers predict which works are likely to be popular. After all, likes and dislikes are also data.


At the beginning of 2024, AI-generated words and images clearly have the capacity to play a role in the design process. A logical next step would be the move to AI in three dimensions. In November last year, for instance, Luma AI, a California-based start-up founded in 2021, released a research preview of its new generative AI tool, Genie. Users can enter text or images as prompts, and Genie will come up with new three-dimensional models. These can be downloaded in a “mesh” format — a 3D surface with the texture mapped across it — allowing them to be further manipulated and refined by the user. At present, Luma’s products are particularly popular in gaming and other virtual environments. But the meshes are capable of being 3D-printed and so brought out into the physical world.

In a design context, it is easy to imagine how a tool such as Genie might one day bypass the need for 2D generative AI. For now, though, 3D generation still has some way to go. “Midjourney is [like] a professional designer,” says Barkley Dai, Luma’s product and growth lead. “The current Genie model is still probably in elementary school.”

One challenge is the relative size of the databases: Dai estimates that the number of 3D models available on which to train an AI tool is in the millions, while that of 2D images is in the billions. However, researchers at Luma and elsewhere are developing ways of reconstructing 3D models from 2D images, which could greatly enlarge the 3D database.

When I spoke to Dai over Zoom, he gave me a demonstration of Genie, asking it to generate a flower vase. He was not satisfied with the first attempt, in glass (the transparency apparently confuses the algorithm). And one vase came out with a flower suspended inside — another example of AI’s lack of pragmatism. The results in ceramics were more convincing, even though the forms were conventional. The choice of artistic styles is much more limited in 3D than 2D, again due to the small size of the database. Dai tried a vase in a “pixelated” style, of the sort found in retro-styled computer games: Genie came up with four vases patterned with distinct, if somewhat crude, blocks of colour.

[Image: side and top view of a colourful vase and flowers]

Israel-based company 3DFY.ai has developed its own AI model, which can, according to its website, generate “unique” 3D models from text prompts to a standard “similar to what a modeller would produce”. So far, the publicly available version of 3DFY Prompt only allows the generation of models in eight categories, including ottomans, tables and swords (for virtual gaming) but the company is exploring ways to expand its database.  

As with the fine arts, a more advanced version of an AI model such as Genie could also conceivably be used at the creative stage by designers or artists to depart from conventional thinking and generate more innovative objects or textures. “The power of generative AI,” Dai says, “is it can design something new that people have never seen before.”

The other side of Luma’s business involves technology that can “capture”, or digitally scan, a 3D object and upload a complete virtual model of it to the computer. This technology is not yet perfectly linked up to Genie. If and when it is, it could enable an artisan, for instance, to train a Genie-like AI model on 3D captures and textual descriptions of their artworks so it could then produce new models in the same style. “It’s not possible yet,” says Dai, “but that’s something we’re working towards.” 

The resulting models could then be 3D-printed in materials such as PLA plastic, aluminium or resin.

With AI developing at a rapid pace, enthusiasts predict a world in which all physical entities could be endlessly multiplied. Everything formerly “man-made” in the physical world, even down to the artisanal vase on your dining table, might be conceived, adjusted for physical constraints and 3D-printed in newly discovered materials.

For now, the experience of designers such as Sampietro, Krasnodebska, Mancuso and Hasoğlu indicates that one of the hardest parts for AI in design and craft remains the very last stages of the process: adjusting for human needs and tastes and then realising the design in the physical world. While the prototypes of the “Spawns” spoons were 3D-printed, remaking them in silver required lost-wax casting, a specialist process that predates AI by several millennia.

Whether new technology will ever be able to replace artists and designers, either as creative thinkers or as makers, is hard to tell. “One thing that I’ve learnt in the past,” says Dai, “is that we often make predictions about things — and all of those predictions are wrong . . . like with the development of AI.”




