Agnes Ferenczi

HISTORY OF GENERATIVE ART - Nicholas Negroponte

In our History of Generative Art series, we focus on Nicholas Negroponte, an American architect and researcher who became one of the key figures in discussions around automation, interactivity, and machine intelligence in visual culture. In the late 1960s, he founded the Architecture Machine Group at MIT (later MIT Media Lab). His work explores how computers could enhance human creativity and serve as collaborators in artistic and design processes.

Nicholas Negroponte, Source: uexternado.edu.co

Nicholas Negroponte, born in 1943 in the United States, studied architecture at MIT in the 1960s. During this time, he began exploring how machines could generate drawings, make decisions, and respond to human input. He developed the idea of an "architecture machine", a collaborative system in which a human and a computer work together in mutual learning and interactive design, replacing traditional linear workflows with a more dialogical relationship between man and machine.

In 1967, he co-founded the Architecture Machine Group at MIT. This research lab focused on developing new ways for humans and computers to collaborate in design. Supported by institutions such as DARPA, the group worked on early experiments in human-computer interaction, including systems that could interpret natural language, learn from user behavior, generate visual output, and simulate aspects of the design process.

The exterior of the new MIT Media Lab, opened in 2010, Source: Unmadindu/Wikipedia

One of their early projects was URBAN5, an experimental system developed in the late 1960s that allowed users to interact with a computer through both language and spatial modeling. URBAN5 tested ideas about interactive design and communication, enabling the machine to observe and reflect the user’s design criteria and decisions.

Negroponte’s work during this period was influenced by cybernetics and by thinkers such as Gordon Pask. He incorporated concepts of feedback, adaptation, and learning into the lab’s systems. In 1970, he published The Architecture Machine, which outlined his vision of a computer that could participate in architectural design through mutual learning. In 1976, he followed this with Soft Architecture Machines, which explored how design systems could become more adaptive and responsive to context and user behavior.

The Architecture Machine by Nicholas Negroponte, 1970, Source: mitp-arch.mitpress.mit.ed

Being Digital by Nicholas Negroponte, 1995, Source: wikipedia.org

After the experimental phase of the Architecture Machine Group, Negroponte established the MIT Media Lab in 1985, together with former MIT president Jerome Wiesner. As director, he helped build the lab into a multidisciplinary research environment focused on media, design, and technology. The lab supported work in areas such as wearable computing, tangible user interfaces, affective computing, and personalized digital media.

In 1995, Negroponte published Being Digital, his most widely known book. It introduced the concept of “bits over atoms”, arguing that digital information would shape our world more fundamentally than physical materials. The book included predictions about the convergence of media, the growth of personalized content, the widespread adoption of the internet, and changes in how people access and share information.

In 2005, he launched the One Laptop per Child (OLPC) initiative, a non-profit organization created to design and distribute affordable, durable laptops for children in the Global South. The project developed the XO laptop, a low-cost, energy-efficient device.

In recent years, Negroponte has continued to be involved with the Media Lab in an advisory capacity and continues to speak on topics such as the future of AI and brain-computer interfaces.

Agnes Ferenczi

COLLECTOR’S CHOICE - DeepDream by Alexander Mordvintsev

This month celebrates the 10th anniversary of DeepDream, an important development in the history of AI-generated art. Introduced in May 2015 by Alexander Mordvintsev, a researcher and artist based in Zurich, DeepDream was one of the first widely recognized applications of neural networks for image generation. It played a major role in popularizing AI art, inspiring a wave of experimentation that continues among many artists today.

Just before DeepDream: 1000 classes #3, 2015/01

Just before DeepDream: 1000 classes #2, 2015/01

Mordvintsev began exploring computers at an early age, which later led him to study computer science and computer vision. In 2014, he joined Google, where he started working with neural networks. As Google supported independent research, he used this opportunity to experiment with these networks and to reverse-engineer image-trained models in order to better understand how they process and interpret visual data.

Starting in January 2015, Mordvintsev began experimenting with AlexNet, a widely used image classification model developed in 2012. During this period, he generated a number of images that were later grouped under the series Just Before DeepDream, which demonstrated the early visual characteristics of the technique. Although these images lacked high resolution and fine detail, they already indicated the potential of deep neural networks for image generation.

On the night of May 18, 2015, he was inspired to run an experiment. Instead of allowing the network to process images layer by layer, he interrupted the process and manipulated the mid-layers, coaxing the network to enhance and generate features from partial data. The experiment produced strange, dream-like visuals that resembled hallucinations. He shared the results on Google’s internal network, where colleagues quickly recognized their significance.
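At its core, the trick can be sketched in a few lines of code: take an intermediate layer and run gradient ascent on the input so that whatever the layer already detects gets amplified. The toy Python sketch below substitutes a single random linear map for a real network layer (Mordvintsev worked with deep convolutional networks at Google); it illustrates the principle only and is not his actual code.

```python
import numpy as np

# Gradient ascent on the input to amplify a layer's activations -- the core
# of the DeepDream idea, shown on a toy "layer" (a random linear map).
# A real run would use a trained convolutional network instead of W.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))    # stand-in for one mid-layer's weights
x = rng.normal(size=64) * 0.1    # stand-in for the input image

def activation_energy(x):
    a = W @ x                    # the "mid-layer" activations
    return 0.5 * np.sum(a ** 2)  # objective: strengthen whatever fires

e0 = activation_energy(x)
for step in range(100):
    grad = W.T @ (W @ x)         # gradient of the energy w.r.t. the input
    x = x + 0.01 * grad / (np.linalg.norm(grad) + 1e-8)  # normalized ascent
# after the loop, activation_energy(x) is far larger than e0
```

On a real network the same loop pushes the image toward eyes, dogs, and swirls, because those are the features the chosen layer responds to.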

Sunset, 2015/05

The program became known as DeepDream, and it is characterized by highly detailed textures and a surreal aesthetic. This visual style is similar to the psychedelic art movement, which emerged in the 1960s in connection with counterculture and often featured swirling patterns and distorted images. An interesting aspect of DeepDream’s early outputs was the frequent appearance of recurring visual motifs, especially dog and cat faces and eyes. This was due to biases in the ImageNet dataset used for training, which included a large number of dog categories.

DeepDream was released as open-source software in July 2015. Mordvintsev wrote a blog post about the method with Christopher Olah and Mike Tyka on Google’s research site. The publication and the images received wide attention online and were featured in many articles. A growing number of users began experimenting with DeepDream through websites and apps, which helped more people learn about machine learning and neural networks.

Father Cat, 2015/05

DeepDream is looking for patterns everywhere #2, 2015/05

DeepDream's public release also influenced several artists to explore neural networks. Mike Tyka, who had worked at Google, began creating art with DeepDream. Mario Klingemann also experimented with the medium, especially during his time at Google’s Machine Learning Residency. In 2016, the Gray Area Foundation for the Arts in San Francisco organized DeepDream: The Art of Neural Networks, one of the first exhibitions focused on artworks made using neural networks. The show included works by Alexander Mordvintsev, Mike Tyka, Memo Akten, James "Pouff" Roberts, and others.

DeepDream demonstrated how machine learning could be used to generate images and contributed to broader public interest in neural networks.

Many of Alexander Mordvintsev’s DeepDream works, created before the program was open-sourced, are now part of prominent collections such as Jediwolf, @BlueBeam9611, and more.

DeepDream: The Art of Neural Networks, Exhibition by Gray Area Foundation for the Arts, San Francisco, 2016, Source: grayarea.org

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Video games & Art

In our History of Generative Art series, today we explore the development of the relationship between video games and art. From their early stages, video games have used elements from artistic traditions and have also influenced contemporary visual culture. Over time, improvements in hardware and software and the spread of the internet have expanded the tools available to developers and artists, from early platforms like the Atari VCS to later consoles such as the PlayStation 3.

Jason Rohrer, Passage, 2007, Source: MOMA

The first experiments with video games took place in the 1950s and 1960s. Projects like Tennis for Two (1958) and Spacewar! (1962) used simple graphics to create basic interactive experiences. These early games were developed in research environments and were not made for a wide audience. Although they were limited in technology and purpose, they showed the potential of computers for interactive entertainment and helped set the foundation for later development of video games as a medium.

Video games began to develop in the 1970s and 1980s alongside the rise of digital technology. In the early 1980s, personal computers and consoles from companies such as Atari, Mattel, Coleco, Commodore, and Apple became available for home use. Although these early systems were technically limited, they introduced a unique visual language. These games used simple graphics, limited colors, and minimal sound. Games such as Pac-Man, Pitfall!, and Zaxxon became icons of early digital design, with stylized environments and abstract forms that required players to interpret elements beyond what appeared on the screen.

Tennis for Two, 1958, Source: wikipedia.com, Brookhaven National Laboratory (BNL)

Pac-Man, 1981 (Atari VCS), Courtesy Bandai Namco

After the early boom, and even though arcades and personal computers thrived, the home console market crashed in 1983 due to an oversaturation of low-quality, rushed, and poorly designed games. In 1985, new home consoles from Nintendo, Sega, and Commodore were introduced, which led to renewed interest in the market. These systems used 8-bit microprocessors and introduced a distinct graphic style with a blocky, hand-drawn look, now referred to as pixel art. Developers worked with strict resolution limits, so each pixel had to be used carefully. Games like The Legend of Zelda and Super Mario Bros. 3 created well-known characters and settings using simple and efficient visual design.

By the early 1990s, personal computers had become common in many homes. During this time, consoles like the Sega Genesis and Super Nintendo competed by highlighting technical features such as resolution, memory, and sound. Improvements in these areas, such as higher resolution and expanded color palettes, made it possible to show more detailed characters and environments, which supported more visually complex games. Developers were able to create more expressive visuals and experiment with different styles, for example, the unusual animations in Earthworm Jim or the painted look of The Legend of Zelda: A Link to the Past.

Super Mario Bros. 3, 1988 (Nintendo Entertainment System), Source: romhacking.ne

The Legend of Zelda: A Link to the Past, 1991 (Super Nintendo Entertainment System), Source: ign.com

In the late 1990s, video games began moving from 2D to 3D environments. New hardware and the use of CD-ROMs made it possible to create larger and more detailed game worlds. Games like Super Mario 64, Tomb Raider, and The Legend of Zelda: Ocarina of Time introduced new ways for players to move through and view space. The blocky textures and angular models of this time showed the technical limits of early 3D games but also represented a move toward more cinematic and structured game design.

In the 2000s, as the technology improved, games showed more realistic graphics, told more complex stories, and created larger worlds often shaped by player choices. Titles like Shadow of the Colossus, BioShock, and Uncharted 2 combined storytelling with player exploration. During this time, motion-capture, dynamic lighting, and detailed sound became important parts of the visual experience. Video games were able to support many different artistic styles, from the simple design of Okami to the open-world creativity of Minecraft.

In the 2010s, museums and cultural institutions began to recognize video games as a form of art. In 2012, MoMA in New York added 14 video games to its collection, including Pac-Man, Tetris, and Portal. The Smithsonian American Art Museum also presented an exhibition called The Art of Video Games, which focused on how games developed visually and emotionally over the past four decades.

Today, technologies such as virtual reality, augmented reality, and online spaces have given artists new ways to work with ideas like space, interaction, and storytelling. Video games have also had an impact on both culture and contemporary art. The visual style developed through them over many decades is now appearing in areas such as digital art, installations, and performance.

The Art of Video Games, Exhibition at Smithsonian American Art Museum, 2012, Source: wikipedia.com (Blake Patterson from Alexandria, VA, USA)

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Cybernetics and Urban Design

Cybernetics, developed in the mid-20th century by Norbert Wiener, focuses on the study of control and communication in mechanical, biological, and social systems. Its principles have influenced various disciplines, including not only science and technology, but also the arts, humanities, and urban design. In the context of city planning, cybernetics helped in understanding cities as dynamic and adaptive systems shaped by continuous feedback between inhabitants, infrastructure, and environmental conditions.

Cities are complex, open systems in which numerous interdependent variables interact and influence one another. Through the use of feedback loops, urban environments can be designed to respond in real time and adapt to user behavior, for example, in self-regulating traffic systems. Cybernetics introduced concepts such as adaptive planning, iterative development, and systems capable of changing in response to their own performance.
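The kind of self-regulating system described above can be illustrated with a toy feedback loop in Python: a controller observes the queue at a traffic signal, compares it with a target, and feeds the error back into the next cycle’s green time. Every number here is invented for illustration and models no real intersection.

```python
# A toy self-regulating traffic signal (all numbers invented for illustration).
# Feedback loop: observe the queue, compare with a target, adjust the control
# variable (green time), and let the adjustment change the next observation.
TARGET = 5.0      # desired queue length, in cars
ARRIVALS = 12.0   # cars arriving per signal cycle
CLEAR_RATE = 1.5  # cars cleared per second of green
BASE_GREEN = 8.0  # green time that exactly clears the average arrivals
GAIN = 0.5        # how strongly the error adjusts the green time

queue = 40.0      # start far from the target
for cycle in range(50):
    green = max(1.0, BASE_GREEN + GAIN * (queue - TARGET))   # controller
    queue = max(0.0, queue + ARRIVALS - CLEAR_RATE * green)  # the "city"
# the queue settles at TARGET without any fixed, centrally planned schedule
```

The point of the cybernetic framing is exactly this: no one computes the final state in advance; the system finds it through repeated measurement and correction.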

Cedric Price and Joan Littlewood, Fun Palace, 1960s, Courtesy Cedric Price fonds/Canadian Centre for Architecture Collection, Montreal

In the 1950s and 1960s, several theorists began investigating how cybernetic principles could influence urban design. One important figure was a British architect, Cedric Price. In the 1960s, he proposed the Fun Palace, a flexible cultural space that could be reconfigured according to users’ needs. This interactive environment, equipped with sensors, cranes, and movable components, enabled users to shape the space through their activities. Although it was never built, the Fun Palace influenced later projects such as the Centre Pompidou in Paris, which adopted several of its concepts related to flexibility and transparency.

The Hungarian artist Nicolas Schöffer also explored the application of cybernetic theory to urban planning. In 1965, he proposed the Cybernetic City, composed of three interconnected zones managed by a central control unit. The design allocated areas for work, contemplation, residence, and leisure. Schöffer believed that advances in automation and cybernetics would increase leisure time and reshape urban life.

Nicolas Schöffer, The Cybernetic City, 1965, Source: Adagp, Paris – Éléonore de Lavandeyra-Schöffer, 2018

Constant Nieuwenhuys’s long-term project New Babylon, developed between 1959 and 1974, also incorporated ideas from cybernetics and the space age. He imagined a future society liberated by automation, where urban space would be a platform for creativity and play. The environment was to be fluid, constantly reconfigurable and structured around a network of interconnected spatial units. These units would allow transformation based on the users’ desires and movements.

Constant Nieuwenhuys, New Babylon, 1959-74, Source: medium.com

While many cybernetic urban designs remained on paper, some were partially realized. Expo ’70 in Osaka represented a large-scale project to apply cybernetic concepts to architecture and urban design. Japanese architect Kenzo Tange designed the expo grounds as a modular and expandable structure, which was capable of growth and change. Several national and corporate pavilions featured interactive technologies, feedback systems, and real-time information displays.

In the present day, cybernetic principles continue to influence urban planning and architecture, particularly through the development of smart cities and responsive environments. Real-time, data-driven systems, such as adaptive traffic signals, responsive public lighting, and environmental monitoring technologies, are now implemented in cities such as Barcelona, Singapore, and Amsterdam.

Kenzo Tange, Japan World Exposition in Osaka, 1970, Source: messynessychic.com

Agnes Ferenczi

COLLECTOR’S CHOICE - Change of Basis by Kjetil Golid

Change of Basis by Kjetil Golid builds on the artist’s ongoing exploration of algorithmic partitioning, taking previous experiments and transforming them into three-dimensional subdivided cubes. The series, consisting of eight works—four in color and four in monochrome—showcases precise, architectural arrangements with contrasting colors and patterns that seem almost to move.

Change of Basis A1, 2023 by Kjetil Golid

Golid is a generative artist and system developer from Norway. With a background in system development, data analysis, graphic design, and mathematical logic, he uses algorithms and data structures to create generative art. After discovering Processing, Golid began exploring ways to turn algorithms and frameworks into visual forms. He builds visually complex works from minimal inputs, always emphasizing the underlying order.

His art often uses the element of the cube. This form has appeared throughout art history as a symbol of structure, order, and spatial exploration, from early modernist painting to contemporary and conceptual art. The conceptual artist Sol LeWitt, for example, systematically generated all possible incomplete open cubes, creating many unique structures. @kGolid takes a similar approach, using cubes, squares, and rectangles to explore construction and deconstruction, order, and variation, showing how many different results can emerge from a simple set of rules.

Incomplete Open Cubes, 1974 by Sol LeWitt

One of Golid’s most recognized projects using this approach is Archetype, launched on Art Blocks in 2021. It began with an algorithm to partition rectangles into smaller rectangles. Over time, Golid refined the visuals, creating works where repetitive patterns appear within a complex and unpredictable structure. This method highlights a sense of order within apparent chaos. A selection of works was showcased on the ZKM Cube in 2021.
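The partitioning idea behind Archetype can be sketched as a simple recursive procedure: each rectangle either becomes a final cell or is cut in two, and the halves are partitioned in turn. The Python sketch below is a generic illustration of this family of algorithms, not Golid’s actual code.

```python
import random

# Recursive rectangle partitioning: every rectangle is either kept as a
# final cell or cut in two along its longer side, and the halves are
# partitioned again. A generic sketch, not Golid's Archetype code.
def partition(rect, depth, rng):
    x, y, w, h = rect
    if depth == 0 or min(w, h) < 2 or rng.random() < 0.3:
        return [rect]                          # keep as a final cell
    if w >= h:                                 # vertical cut
        cut = rng.randint(1, w - 1)
        halves = [(x, y, cut, h), (x + cut, y, w - cut, h)]
    else:                                      # horizontal cut
        cut = rng.randint(1, h - 1)
        halves = [(x, y, w, cut), (x, y + cut, w, h - cut)]
    return [cell for half in halves for cell in partition(half, depth - 1, rng)]

rng = random.Random(1)
cells = partition((0, 0, 64, 64), depth=6, rng=rng)
# however the random cuts fall, the cells always tile the original rectangle
```

A handful of rules like these, plus a seed, is enough to produce an endless family of related but distinct compositions, which is the sense of order within chaos the series plays with.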

Archetype #255, 2021 by Kjetil Golid

His 2023 series, Change of Basis, builds on these earlier partitioning algorithms by starting from a two-dimensional structural pattern, then slicing it diagonally in three dimensions to create dynamic, gradient-like cross-sections and cathedral-like forms. Despite the strict geometry, the use of color and playful lines softens the artwork.

The series reflects influences from Moebius’ science fiction illustrations and Pushwagner’s perspective techniques. It consists of eight pieces: four works (Series A) are colorful, while four pieces (Series B) are in monochrome. One work was first showcased at our Node to Node Art Salon in Paris in 2023.

Golid’s work can be found in collections such as curatedxyz, VincentVanDough, DCinvestor, and many more.

Change of Basis B1, 2023 by Kjetil Golid

Change of Basis A2, 2023 by Kjetil Golid

Agnes Ferenczi

HISTORY OF GENERATIVE ART - British Cybernetics

In our History of Generative Art series, we now focus on British cybernetics, which began influencing art and education in the mid-to-late 1950s. As cybernetics developed as a field, its principles found applications beyond science and engineering. In Britain, these ideas became particularly prominent in art schools, where many artists and educators began exploring systems thinking, feedback loops, automation, and the relationship between human perception and technology.

 Gordon Pask, Musicolour, 1953–57, Source: Pask: Comment 1971, p.85, fig.31.

Cybernetics is the study of communication and control in both machines and living systems, emphasizing feedback mechanisms and self-regulation. Developed by Norbert Wiener in the late 1940s, it explores how systems process information and adjust to change. This provided a theoretical framework that allowed artists to explore interactive and systematic approaches to art.

The Independent Group w/ P. Smithson, E. Paolozzi, A. Smithson, N. Henderson, 1956, Source: researchgate.net

Members of the Ratio Club, Source: researchgate.net, Courtesy Wellcome Library for the History and Understanding of Medicine.

One of the most influential groups in British cybernetics was the Ratio Club, founded in London in 1949. The club brought together psychiatrists, neurologists, mathematicians, engineers, and other scientists to discuss information processing in brains and machines. Members included Alan Turing, Jack Good, Ross Ashby, Grey Walter, and others.

 The influence of cybernetics on British art emerged in the mid-to-late 1950s, particularly among artists affiliated with the Independent Group, one of the first in the UK to explore mass media, technology, and popular culture. Key members, including Richard Hamilton, Nigel Henderson, Eduardo Paolozzi, and John McHale, saw science and technology as shaping visual culture. An early example is Hamilton’s Man, Machine & Motion (1955), which examined human-machine relationships in areas like space exploration and underwater diving.

 Roy Ascott with students at Ealing Art School, 1963, Source: npg.org.uk

British art schools played a key role in integrating cybernetics into art. Richard Hamilton’s Basic Design course at King’s College, Newcastle, was influenced by Bauhaus principles and technology. His student, Roy Ascott, expanded these ideas at Ealing Art School, developing the Groundcourse, which emphasized systems thinking, interactivity, and audience participation. He later introduced a similar program at Ipswich Civic College, where educators like Stephen Willats and Stroud Cornock further explored audience engagement and technology. By the late 1960s, polytechnics merged art and technology, encouraging interdisciplinary collaborations.

 

Gustav Metzger was an important figure in early cybernetic and computer-controlled art. In 1961, he wrote a manifesto on the use of automated computer systems in artistic creation. His work explored self-regulation, feedback loops, and destruction as a creative process. One of his pioneering pieces, Five Screens with Computer, was an environmental installation in which computer-controlled screens dynamically altered their visual content. Due to the high costs, however, the project was never realized.

 Gordon Pask, The Colloquy of Mobiles, 1968, Source: medienkunstnetz.de

Another pioneer of the UK cybernetics movement was Gordon Pask. His Musicolour machine (1953) was an interactive light display that responded to live music. The system adapted over time, encouraging performers to change their behavior to elicit new visual effects. His later work, Colloquy of Mobiles (1968), was an interactive installation featuring mobile robotic sculptures that communicated with each other and with the audience through light and movement.

Stephen Willats also integrated cybernetics, systems theory, and audience participation into his work. One of his key works, Meta Filter (1973-75), was an interactive installation that explored how people perceive and process visual information. By incorporating decision-making processes and adaptive responses, the piece used real-time audience input to modify its structure.

Cybernetic Serendipity was the first major international exhibition focused on cybernetic art, held at the Institute of Contemporary Arts in London in 1968, curated by Jasia Reichardt. It showcased works that incorporated automation, generative processes, and interactive systems. The exhibition featured artists, engineers, and scientists, including Gordon Pask, Nam June Paik, Nicolas Schöffer and Edward Ihnatowicz.

British cybernetic art introduced a new way of thinking, viewing art as a dynamic, interactive system with feedback between creator and audience. This legacy continues to shape contemporary digital and generative art.

 Stephen Willats, Meta Filter, 1973-75, Source: stephenwillats.com

Cybernetic Serendipity at the Institute of Contemporary Arts in London, 1968, Sources: medienkunstnetz.de

Agnes Ferenczi

COLLECTOR'S CHOICE - Arnolfini Series by Harold Cohen

The Arnolfini Series by Harold Cohen is a collection of plotter drawings created in 1983 using the artist’s autonomous art-generating program, AARON. Developed in the early 1970s, AARON is considered one of the first AI art systems. The works, featuring complex forms and intricate line drawings, highlight the program’s early capacity for autonomous decision-making, simulating a form of creative autonomy.

Arnolfini Series, 1983 by Harold Cohen (part of Kate Vass Galerie’s collection)

Harold Cohen was a British artist born in London in 1928. He began his career as a painter after graduating from the Slade School of Fine Art. His first solo exhibition took place at the Ashmolean Museum in Oxford in 1951, and by the mid-1960s, he was recognized as one of Britain’s leading painters. In 1966, he was selected to represent Great Britain at the Venice Biennale.

Portrait of Harold Cohen with SGI System in 1995 © Harold Cohen, Boston Computer Museum, 1995. Courtesy of Hank Morgan & Harold Cohen Trust

In 1968, at the peak of his painting career, he moved to the United States to take up a visiting lecturer position at the University of California, San Diego. There, he learned the programming language FORTRAN, and in 1971, he joined Stanford University’s Artificial Intelligence Laboratory for a two-year residency. During this period, he began exploring the potential of machine-generated art, which eventually led to the development of AARON.

Detail of the British Pavilion at the 1966 Venice Biennale, showing Cohen’s painting on the right, Courtesy Harold Cohen’s archive

He began developing AARON in the early 1970s and continued to refine it until his passing in 2016. AARON uses a set of predefined rules created by Cohen to autonomously generate images, enabling the program to independently make decisions on composition and color palette. The title refers to the biblical figure who was anointed as a speaker for his brother Moses and raises questions about the way artistic creation is often regarded as a form of communication with the divine.
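As a drastically simplified caricature of what drawing by rule means, the Python sketch below generates a meandering line from two fixed rules. Cohen’s actual rule set was vastly richer, encoding knowledge of composition, closed forms, and eventually figuration; none of the rules below are his.

```python
import random

# Two fixed rules steering a pen: (1) mostly continue straight, sometimes
# turn; (2) never leave the canvas -- turn back instead. Purely a caricature
# of rule-based drawing; none of these rules are Cohen's.
def draw_figure(rng, size=100, steps=200):
    x, y = size // 2, size // 2
    heading = rng.choice([(1, 0), (0, 1), (-1, 0), (0, -1)])
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() < 0.2:                       # rule 1: occasional turn
            dx, dy = heading
            heading = rng.choice([(dy, -dx), (-dy, dx)])
        nx, ny = x + heading[0], y + heading[1]
        if not (0 <= nx < size and 0 <= ny < size):  # rule 2: stay on canvas
            heading = (-heading[0], -heading[1])
            nx, ny = x + heading[0], y + heading[1]
        x, y = nx, ny
        path.append((x, y))
    return path

rng = random.Random(7)
path = draw_figure(rng)
```

Even two rules already yield drawings no one authored stroke by stroke; AARON scaled this idea up to the point where the program composed entire images.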

Arnolfini Series, 1983 by Harold Cohen

The software has generated artworks using Cohen’s custom-built plotters and painting machines, which translate computer commands into line drawings on paper with automated pens and apply color using brushes. In its early years, AARON could only produce monochrome line drawings, which were sometimes hand-colored by Cohen.

The 1979 exhibition Drawings at SFMOMA featured a “turtle” robot creating drawings in the gallery, Source: Collection of the Computer History Museum, 102627449

The Arnolfini Series, created in 1983, is an example of Harold Cohen’s early machine-generated drawings before AARON incorporated figuration. These plotter drawings in ink on paper were produced using Cohen’s software and presented at his exhibition at the Arnolfini Gallery in Bristol in the same year. The works consist of fragmented lines and geometric structures, demonstrating the system’s ability to generate non-representational compositions based on programmed rules.

Arnolfini Series, 1983 by Harold Cohen

In the years following this series, Cohen continued developing AARON, modifying the program so that it could choose and apply colors autonomously and generate real-world forms, including foliage and human figures. He kept refining its capabilities, further exploring the relationship between artistic decision-making and computational processes.

One piece from the series is part of @ArtOnBlockchain’s collection.

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Cyberfeminism

In today’s History of Generative Art, we introduce cyberfeminism, which emerged in the early 1990s alongside the rise of the internet, drawing from third-wave feminism, postmodernism, and media theory. It represents an international group of female thinkers, coders, and media artists who critique, theorize, and reshape digital spaces and new media technologies, exploring how technology can challenge existing power structures.

VNS Matrix, A Cyberfeminist Manifesto for the 21st Century, 1991, Source: vnsmatrix.net

In the 1990s, the term cyberfeminism was independently coined by British theorist Sadie Plant and the Australian artist collective VNS Matrix. VNS Matrix combined art with French feminist theory to challenge the male-dominated internet. They published the Cyberfeminist Manifesto for the 21st Century as a statement against traditional norms. Meanwhile, Plant explored how digital technology could shape feminist theory, describing the internet as an inherently feminine, non-linear, and self-replicating space.

Before the term was coined, feminist theorists and artists were already examining the relationship between gender and technology. One of the most significant early influences was Donna Haraway’s 1985 essay, A Cyborg Manifesto, which contributed to the development of cyberfeminism. In this work, Haraway explores the cyborg—a hybrid of machine and organism—as a figure that transcends gender and race. She argues that cyborgs challenge traditional hierarchies and offer a future for overcoming biological determinism while promoting androgyny as an ideal.

Donna Haraway, A Cyborg Manifesto, 1985, Source: Macat Library

Old Boys Network, First Cyberfeminist International, 1997, Source: monoskop.org

Cyberfeminism grew in prominence throughout the 1990s, influencing artists and theorists from North America, Australia, Germany, and the UK. A key moment in its history was the 1997 First Cyberfeminist International, organized by the Berlin-based collective Old Boys Network. Held at Documenta X in Kassel, Germany, the event brought together 38 women from 12 countries.

An important artist in the movement is Linda Dement, who challenges gender norms in her artwork Cyberflesh Girlmonster (1995), where she created interactive bodies from scanned female body parts that viewers could engage with, triggering sounds, videos, and texts. Faith Wilding, an American artist, created the Recombinants (1992-1996) collage series, featuring hybrid compositions of machines, plants, humans, and animals, exploring their interconnectedness with technology.

Linda Dement, Cyberflesh Girlmonster, 1995, Source: lindadement.com

Cornelia Sollfrank, Female Extension, 1997, Source: medienkunstnetz.de

Lynn Hershman Leeson, a multimedia artist who often uses interactive technology and film, challenges traditional notions of gender in her work. One of her most iconic projects, CybeRoberta (1996), features a doll with cameras embedded in its eyes, live-streaming its perspective to a website. By seeing through the doll’s eyes, viewers extend their vision through a technological surrogate.

Cornelia Sollfrank, a member of the Old Boys Network, subverted the Hamburger Kunsthalle’s 1997 Net art competition with the project “Female Extension”. She created 288 fictional female artists with unique identities and submitted them as participants. Using an algorithm, she generated 127 Net art pieces by recombining HTML material from the web. Even though there was a “high number” of female applicants, the prizes ultimately went to male artists. Sollfrank later revealed her intervention, exposing gender bias in digital art.

Lynn Hershman Leeson, CybeRoberta, 1996, Source: altmansiegel.com

By the end of the decade, several critical issues within cyberfeminism emerged. The early optimism that the internet would become a universally liberating space was seen as overly idealistic. In response, in the 2000s, Technofeminism emerged as an evolution of cyberfeminism, integrating science and technology studies with feminist theory to examine gendered aspects of technology beyond the digital world.

Agnes Ferenczi

COLLECTOR’S CHOICE - Learning Nature by David Young

“Learning Nature” (2018–2020) by David Young challenges common perceptions of AI by prioritizing aesthetics over efficiency and large-scale data processing. Using his own photographs of flowers as training data, Young observes AI’s behavior and how the system interprets visual elements. The resulting images reflect AI’s own way of interpreting natural forms.

Learning Nature (b63e,4400-19,4,10 ,12,44,30), 2019 by David Young

David Young is a New York-based artist and designer who works with technologies such as artificial intelligence and quantum computing. He has a background in computer science, visual studies, and design. Although he experimented with AI as early as the 1980s, he began working with it more seriously in 2016.

Portrait of David Young

He has always been interested in understanding AI—how it works and what it can be—while also addressing common misconceptions. To explore this, he created a course called Designing AI at Parsons School of Design in New York. In the course, he brought together students from different backgrounds who had no technical knowledge of AI to see if they could develop a more accessible way to discuss the subject. The course also influenced his own approach.

Learning Nature (z14,cr,A), 2018 by David Young

To make AI more approachable, he wanted to move away from the usual applications of this technology, which are often focused on optimization, efficiency, and large datasets. Instead, he considered focusing on beauty and aesthetics and starting the training on a smaller, more personal scale. This approach allowed him to study how AI works in a different way.

‘Learning Nature’ book by David Young

One of his early series, which led to his “Learning Nature” series, was called “Flowers”. These images were created using GANs trained on a small dataset of photographs he took of flowers at his farm in Bovina, Upstate New York. He deliberately chose this subject to differentiate his work from conventional AI applications and to reference the artistic history of the region, including the Hudson River School.

Flowers (b22,1582,2), 2018 by David Young

Later, he refined his method by photographing flowers against a neutral background so that the AI would focus on the subject rather than the entire scene. This process led to the “Learning Nature” series, created between 2018 and 2020, which allowed him to examine how AI learns and behaves.

Learning Nature (b63f,2600-19,4,9,1 0,27,44), 2019 by David Young

He noticed that its learning process sometimes resembled human creativity, particularly in how it repeated patterns and struggled with certain aspects. However, he also observed that the system lacked the ability to understand and complete fine details. The images it produced were not accurate but reflected AI’s own way of interpreting natural forms.

Learning Nature (b63e,4400,19,1,29 ,16, 54,11), 2019 by David Young

The creation of images that do not precisely reflect reality has a long history. Young’s series can be compared to 17th- and 18th-century Dutch flower paintings, where floral arrangements included species that would not have bloomed at the same time.

A Vase with Flowers, 1613 by Jacob Vosmaer, Source: metmuseum.org

Similarly, the AI-generated images in Learning Nature depict flowers that could not exist in nature, combining elements in ways that are visually convincing but not botanically possible.

The Learning Nature series can be found in many esteemed private collections, such as:

Jediwolf, Delronde, SeedPhrase, @NGMIoutalive and many more.

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Cybernetics

In our History of Generative Art series, we focus on cybernetics, which emerged as an interdisciplinary field in the mid-20th century. Cybernetics, developed by Norbert Wiener, studies how systems regulate themselves through feedback and communication. Its principles have been applied in various areas, from governance to artificial intelligence, architecture, and design. The field also influenced art, inspiring interactive and generative works, especially the cybernetic sculptures of Nicolas Schöffer.

Norbert Wiener at MIT, Source: researchgate.net

McCulloch (right) and Pitts (left) in 1949, Source: Semantic Scholar

Cybernetics is the study of systems that regulate and communicate within themselves and their environment through feedback loops. It examines how information is processed, controlled, and transmitted in both living organisms and machines. The field began with the publication of Cybernetics: Or Control and Communication in the Animal and the Machine by the American mathematician and philosopher Norbert Wiener in 1948. However, before Wiener’s book, key ideas had already been explored by Warren McCulloch and Arturo Rosenblueth, whose contributions are often overlooked.

In 1943, McCulloch and Walter Pitts introduced a theoretical model of neural networks in their paper A Logical Calculus of the Ideas Immanent in Nervous Activity, showing how neurons process information using binary logic. Around the same time, Rosenblueth, working with Wiener and Julian Bigelow, studied purposive behavior and feedback mechanisms in both biological and artificial systems.

The Macy Conferences 1946–1953, Source: press.uchicago.edu

Cybernetics: Or Control and Communication in the Animal and the Machine by Norbert Wiener, Source: sciencebookaday.com

Cybernetics was further developed at the Macy Conferences, held between 1946 and 1953. These conferences brought together researchers from various disciplines, including biology, psychology, and engineering. Participants such as Warren McCulloch, John von Neumann, and Norbert Wiener explored commonalities in feedback mechanisms across different systems.

The term “cybernetics” was chosen by Wiener and his colleagues from the Ancient Greek kubernētikē, meaning "steersmanship" or "governance." The term appears in Plato’s Republic and Alcibiades, where the metaphor of a steersman is used to signify the governance of people.

Project Cybersyn Operations Room, Source: mitpress.mit.edu

This idea of control and regulation was soon extended to governance, urban planning, and social organization, particularly during the second wave of cybernetics, which emerged in the late 1960s and 1970s and emphasized self-organization and adaptive systems. One of the most well-known projects was Chile's Project Cybersyn in the early 1970s, an ambitious plan to use cybernetic methods to manage the national economy. The system aimed to optimize decision-making through real-time data analysis and feedback loops. The project was discontinued after the 1973 military coup.

Nicolas Schöffer, CYSP1, 1956, © Adagp, Paris – Éléonore de Lavandeyra-Schöffer

Cybernetics also influenced the arts. Nicolas Schöffer was one of the first artists to integrate it into his work. In the 1950s, he created cybernetic sculptures equipped with sensors that allowed them to react to sound, movement, light, and even meteorological phenomena in their environment. Schöffer saw his works as living systems that constantly moved, changed, and adapted to their surroundings.

Contemporary art continues to be influenced by the legacy of cybernetics. In 2024, the French creative studio u2p050 created Action Reaction, a work exploring cybernetics and the role of machines in shaping the world. The piece reflects on the dual nature of cybernetics: how it can be used for control and regulation, but also for creativity, efficiency, and artistic expression. The work is currently on view at our Interthinking exhibition at Budapest Art Factory.

Studio u2p050, Action|Reaction, 2024

 Cybernetic principles have influenced many other fields. In the 1950s and 60s, they helped shape systems theory, informing models of complex interactions in ecosystems and organizational structures. In medicine, biofeedback techniques were developed to help people control bodily functions like heart rate and muscle tension. Cybernetics also played a key role in early AI research, contributing to the development of neural networks.

Agnes Ferenczi

COLLECTOR’S CHOICE - Excessize by Roope Rainisto

"Excessize", by Finnish AI artist Roope Rainisto, is a critical exploration of American consumerism and fast food culture. Created in 2022, the series serves as a precursor to his renowned “Life in West America”. By blending the visual style of vintage Americana photography with AI technology, Rainisto contrasts the once-idealized golden age of American fast food with its later-recognized negative consequences.

Frozen Fresh (from Excessize series), 2022 by Roope Rainisto

Roope Rainisto has worked as a designer, specializing in UX, for years while also pursuing photography as a hobby. He began working with AI in 2021 and has since been experimenting with the combination of photography and AI technology to create post-photographic artworks.

Portrait of Roope Rainisto

In 2023, he created his renowned “Life in West America” series, which introduced a new visual language that was unprecedented at the time. Influenced by early American vintage color photography, he used custom diffusion models to combine the aesthetics of the traditional medium with AI. The series focuses on individuals living in rural America, exploring their hopes, dreams, lives, and aspirations.

The Gathering (from Life in West America), 2023 by Roope Rainisto

The precursor to this series is “Excessize”, created in 2022 as both a study and an exploration. Compared to Life in West America, which takes on a more optimistic tone, Excessize presents a more critical perspective. Here, Roope Rainisto also employs the visual style of vintage Americana photography alongside AI to capture the post-war United States atmosphere—a time marked by a significant rise in food production following years of rationing.

Emergent Culture (from Excessize series), 2022 by Roope Rainisto

Fast food restaurants, which began to expand in the 1950s, catered to individuals across all socioeconomic levels, from presidents to the working class, promoting values of efficiency and uniformity. Many pop artists, such as Andy Warhol, Claes Oldenburg, and Roy Lichtenstein, depicted fast food culture as a symbol of mass consumption and commercialism, often emphasizing its ubiquity and cultural significance.

French Fries and Ketchup, 1963 by Claes Oldenburg, Source: whitney.org

Rainisto’s Excessize captures both the celebration of post-war economic growth and a critique of its negative societal impacts. Through repetition and uniformity in the images, Rainisto highlights the consequences of global cultural homogenization, focusing on its role in eroding local traditions.

Service Culture (from Excessize series), 2022 by Roope Rainisto

This theme mirrors the ongoing expansion of AI, which moves towards automation, affecting authenticity and individuality. Just as the mass production of fast food promotes uniformity, the use of AI in the corporate sector prioritizes efficiency and scalability, potentially leading to similar homogenization in content and product creation.

Days Of Meat (from Excessize series), 2022 by Roope Rainisto

This series urges a reconsideration of how mass production and AI shape our society and cultural heritage.

The works from Excessize can be found in prestigious collections such as those of @RaoulGMI, @balon_art, @maxkarlan, and many more.

 

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Processing

In our History of Generative Art series, we would like to spotlight Processing. Developed in 2001 by Casey Reas and Ben Fry, it was founded on a revolutionary idea: making programming accessible to artists and designers to create visuals, animations, and interactive works with code. This tool has influenced a generation of generative artists, shaping the way code is used as a medium for artistic expression.

Casey Reas, Process 14 (Software 2), 2012

The early goal of Processing was to make coding accessible to artists, architects, and designers, while also providing a platform for those already proficient in programming to create images. Casey Reas and Ben Fry envisioned Processing as a bridge between graphic design and computer science, allowing people to sketch ideas using code in the same way an artist sketches with a pencil.

The duo drew significant inspiration from earlier work at the MIT Media Lab, especially from the Visual Language Workshop (VLW) and Design By Numbers (DBN), both led by John Maeda. While DBN provided a minimalist coding environment, its fixed canvas size and grayscale output were limiting.

Jared S Tarbell, Happy Place, 2004

Casey Reas, Process Compendium, 2004 - 2010

 Processing built on DBN’s strengths while eliminating its limitations, allowing users to work with color, larger canvases, and even 3D graphics. It simplified many of the complexities that often make traditional coding daunting for beginners, offering a minimalistic interface that encouraged users to start coding without being overwhelmed by technical details. This simplicity contributed significantly to its popularity.

 Besides its simplicity, another important element was that it was completely open-source and free to use. The developers encouraged sharing of the software and the works created with it through the internet. While it initially attracted a relatively small group of users, its community grew rapidly as various forums and platforms emerged, providing spaces for users to discuss their work, seek help, and share creations. 

Manoloide, aaaaa, 2018

Many prominent generative artists began their art careers using Processing. Its developer, Casey Reas, became a leading artist, creating intricate abstract works through code. His project “Process Compendium” (2004-2010) explores generative art by defining simple elements and behaviors that interact to produce dynamic, evolving visuals. A descriptive text guides the software, leaving space for interpretation.

One of the most important artists to adopt Processing after its launch is Manoloide. The Argentinian visual artist and coder has been using Processing since the early 2010s, exploring the intersection of organic and artificial elements to create his most iconic works, rich in variety and vibrant color. Early works such as Mantel Blue and ‘aaaaa’ (2018) were exhibited at ‘Automat und Mensch’ in Zurich in 2019. His ‘Last Flowers’ series (2021) is an excellent example of how a Processing masterpiece emerges.

Manoloide, Last Flowers Red, 2021

Jared S. Tarbell began working with Processing in the early 2000s. Using the software, he developed numerous abstract, geometric artworks that blend mathematical elegance with artistic precision, making each piece feel like a spiritual experience generated by a computer. Last year, he presented his “Substrate Subcenter” (2024), building on his famous “Substrate series” from 2003, which he created using an early version of Processing.

The list of artists who have worked with Processing is extensive. Numerous books delve into the topic, such as “Processing: A Programming Handbook for Visual Designers and Artists” by Casey Reas and Ben Fry, published by MIT Press, or the “20th Anniversary Community Catalog”, released in 2022, which highlights the community-building aspect of the software.

Agnes Ferenczi

COLLECTOR’S CHOICE - PAL by Marcelo Soria-Rodríguez and Iskra Velitchkova

PAL is an early conceptual work created in 2022 by Marcelo Soria-Rodríguez and Iskra Velitchkova, using a generative system to explore love—its origins, roots, and the consequences of how we navigate its inescapable flaws. While relationships are often marked by unpredictability and vulnerability, these very qualities make them valuable and deeply human. In our constant pursuit of the ideal, PAL raises a question: can a generative system, designed for perfection, ever truly capture the essence and richness of imperfect love?

PAL, 2022 trailer by Marcelo Soria-Rodríguez and Iskra Velitchkova

PAL is the first collaborative project between Marcelo Soria-Rodríguez and Iskra Velitchkova, two generative artists from Spain. In their art, both focus on the relationship between humans and machines and how this interaction affects our understanding of ourselves. Instead of just using technology as a tool, they work with it as a collaborator to explore human perception, emotions, and limitations.

PAL, 2022 trailer by Marcelo Soria-Rodríguez and Iskra Velitchkova

The project began as a performance at the 2023 Art SG in Singapore. The artists developed a generative system that produced video works, which were then transferred onto a VHS tape, bringing the digital outputs into a physical medium. The tape was played on an analog TV in a continuous loop for four days. To introduce external interference, the artists placed a small neodymium magnet inside the media player.

PAL, 2022 trailer by Marcelo Soria-Rodríguez and Iskra Velitchkova

The video on the VHS tape was built around a circle—a fundamental shape representing unity, an idealized union, and the concept of perfection. These circular compositions, created by a generative system, symbolized the human pursuit of balance, harmony, and the idea of a perfect relationship.

PAL #7, 2022 by Marcelo Soria-Rodríguez and Iskra Velitchkova

Over four days, as the video played, the magnet gradually distorted and erased parts of the visuals, replacing them with static and glitches. Unlike a digitally programmed effect, this interference was unpredictable and irreversible. The fact that the distortion came from the physical world further emphasized the contrast between the controlled precision of generative systems and the uncertainty of real life. This transformation mirrored the imperfections that shape human relationships—unpredictable, evolving, and beyond control—highlighting that their beauty lies not in perfection, but in the very flaws, changes, and uncertainties that make them real.

The entire performance was recorded on the VHS tape, which the artists digitized to create 100 unique digital pieces.

PAL can be found in many esteemed private collections such as Karatekid, TheFunnyGuys, Lemonde2d, iki_jima, Kate Vass Galerie and many more.

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Quantum Art

David Young, Quantum Drawings, 2021

In this History of Generative Art series, we explore the fascinating emergence of quantum generative art. While generative art has long been powered by classical computing, quantum computing offers entirely new possibilities, opening a new chapter in the evolution of generative art, where the principles of quantum mechanics become both a tool and an inspiration for artists.

Quantum computing is a revolutionary paradigm in computation that uses quantum mechanics to process information in fundamentally different ways from classical computers. It started in the 1980s, when Richard Feynman and Yuri Manin proposed the idea of using quantum mechanical principles to build more powerful computational models.

Interior of an IBM Quantum Computer, Source: IBM

Unlike classical computers, which use bits (0 or 1), quantum computers use qubits, which can exist in a state of 0, 1, or both simultaneously due to superposition. They also exploit entanglement, where qubits become interconnected, allowing changes in one qubit to instantly affect another, regardless of distance. These properties enable quantum computers to solve complex problems at speeds far beyond classical computers.
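To make superposition concrete, here is a small illustrative sketch of our own (not drawn from any artwork discussed here) that simulates a single qubit as a two-element state vector with NumPy: a Hadamard gate puts the qubit into an equal superposition, and the Born rule then gives a 50/50 chance of measuring 0 or 1.

```python
import numpy as np

# A qubit can be represented as a 2-element complex state vector.
# |0> and |1> are the classical basis states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

# Measurement probabilities follow the Born rule: |amplitude|^2.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- equal chance of reading 0 or 1
```

Real quantum hardware adds noise and entanglement across many qubits; this two-line state vector only shows the single-qubit idea.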

Antony Gormley, Quantum Clouds, 1999, Source: wikipedia.org

Jonathon Keats, Quantum Entanglements, 2011 © Jonathon Keats

Since the 1980s, major tech companies and research institutions have been working on building and improving quantum computing technology, from IBM’s Quantum platform to Google’s Sycamore processor. Even though these machines are still in the early stages, many artists are exploring how quantum principles can be applied to create innovative artworks.

Artists first focused on visualizing quantum physics concepts rather than directly using quantum mechanisms. Works such as “Quantum Man” (2007) and the “Buckyball Series” (2009) by Julian Voss-Andreae explored ideas like wave-particle duality, entanglement, and the ephemeral nature of matter. Other important examples include Antony Gormley’s “Quantum Clouds” (1999) and Jonathon Keats’s “Quantum Entanglements” (2011), both of which draw inspiration from quantum mechanics and translate these concepts into physical forms.

Libby Heaney, Ent-, 2022, Source: libbyheaney.com

Libby Heaney, who has worked with quantum computing since 2019, is among the first artists to incorporate it directly into her art. Her Lumen Prize-winning piece “Ent-” (2022) is a 360-degree immersive installation in which she reinterprets the central panel of Hieronymus Bosch’s The Garden of Earthly Delights, animating her scanned watercolor paintings with her custom quantum code.

Another early adopter is David Young, who began exploring quantum computing in 2021, focusing on understanding how the technology works. Using quantum computers from IBM Quantum, he produced outputs that he processed through his custom code and presented as “Quantum Drawings” (2021).

David Young, Q 7, 2021 (From Quantum Drawings series)

Pindar Van Arman collaborated with quantum computing researcher Russell Huffman on “Quantum Skull” (2022), combining AI-generated art with a quantum computing technique. The process involved mapping qubits to pixels, encoding color values via qubit rotations, and recycling qubits across circuits due to hardware limitations. 

Quantum computers are still in their early stages of development, and access to them is limited. Creating quantum algorithms for artistic purposes requires a deep understanding of both quantum mechanics and programming, making this a highly specialized field.

Pindar Van Arman and Russell Huffman, Quantum Skull, 2022, Source: vanarman.com

Agnes Ferenczi

COLLECTOR’S CHOICE - Alternatives by Espen Kluge

Alternatives by Espen Kluge is a generative portrait series created through custom code and data collected from photographs. It employs algorithmic processes to reinterpret human faces, focusing on individuality, structure, and improvisation. The work represents a key moment where traditional portraiture meets contemporary, generative digital methods.

Espen Kluge, she really thinks about it, 2019

The Norwegian-born composer, visual artist, and creative coder Espen Kluge has always been interested in the inward, exploratory, and meditative yet chaotic qualities of the creative process. This sensibility is reflected in his approach to portraiture, where the human face becomes a surface on which to explore inner states, emotional mapping, and algorithmic interpretation.

Portraiture has a long and complex history in art. Traditionally, it has served to record, idealize, or convey the status of its subjects. In the 20th century, artists began to challenge the idea of likeness, exploring how form and abstraction could also express inner character. This shift is especially evident in the works of Russian Constructivist artists like Naum Gabo, who emphasized structure, geometry, and the dynamic use of space. This influence is reflected in Kluge’s portraits.

Naum Gabo, Head No. 2, 1916 (enlarged version 1964), Source: tate.org.uk

Kluge’s renowned Alternatives portrait project began in 2013, when, while working on an interactive portrait logo for his website, he developed a piece of JavaScript code that transformed photographs into colorful, vector-based images. He returned to this idea in 2019, refining the code, which works by looping through an image’s pixels, selecting some of them semi-randomly, and connecting them with lines. Because the final result depends on the source image, he selected portraits with expressive features, strong lighting, and rich skin tones.
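Kluge’s actual JavaScript is not reproduced here, but the approach he describes can be sketched in a few hypothetical lines of Python: walk the image’s pixels, keep some of them semi-randomly (here weighted by brightness, an assumption on our part), and join the kept points with line segments.

```python
import random

# Hypothetical sketch of the approach described above -- not Kluge's
# actual code. We loop through a grid of pixels, keep a small random
# subset, and connect consecutive kept points with lines.
random.seed(7)

WIDTH, HEIGHT = 64, 64

def brightness(x, y):
    # Stand-in for reading a pixel value from a source photograph.
    return (x * y) % 256

# Select pixels with a probability weighted by their brightness,
# so the structure of the source image steers the drawing.
points = [
    (x, y)
    for y in range(HEIGHT)
    for x in range(WIDTH)
    if random.random() < brightness(x, y) / 256 * 0.05
]

# Each consecutive pair of selected points becomes one line segment.
lines = list(zip(points, points[1:]))
print(f"{len(points)} points, {len(lines)} lines")
```

Swapping the stand-in `brightness` function for real photograph data, and rendering the segments in color, is what ties each output back to its source portrait.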

Espen Kluge, slowly passing, 2019

The final series comprises 100 unique portraits, each of which feels emotive, especially when compared to traditional generative art, which can often be cold, geometric, and repetitive. The images draw from both figurative and abstract traditions, emphasizing form and rhythm. Vibrant colors and compositional geometry convey a sense of motion and psychological depth. The works were first exhibited in 2019 at Kate Vass Galerie in Zurich, curated by ArtNome. It was one of the early shows to present NFTs alongside physical artworks in a gallery context.

Espen Kluge, little ability, 2019

Three years later, in 2022, he revisited the same dataset with a new algorithm, resulting in the Lyrical Convergence series. Shown for the first time at the “Dear Machine, Paint for Me” exhibition in Zurich, this new body of work moved further into abstraction, translating facial data into forms that suggest emotional states and the inner nature of these figures rather than specific human features.

Espen Kluge, Lyrical Convergence #50, 2022

While Alternatives retained a visual link to portraiture, Lyrical Convergence introduced more abstract, fluid, and centralized forms. The compositions focus on organic shapes, monochrome backgrounds, soft color palettes, and unified line structures, drawing on the aesthetics of lyrical abstraction, especially the works of Georges Mathieu. 

The two series form a conceptual and technical pair. They explore how the same data can yield different outcomes through changes in algorithmic structure. This transformation, from the structured figurative forms of Alternatives to the abstractions of Lyrical Convergence, illustrates Kluge’s interest in the mechanics of how we perceive images and how they are constructed through generative processes.

 

Pieces from Kluge’s Alternatives series are part of the collection of Bharat Krymo, Museum of Crypto Art, WangXiang, and many more.

Espen Kluge, Lyrical Convergence #49, 2022

Agnes Ferenczi

COLLECTOR’S CHOICE - Five Self-Portraits at Ages 18, 30, 45, 60, and 70 by Nancy Burson

"Five Self-Portraits at Ages 18, 30, 45, 60, and 70" by Nancy Burson is a conceptual study of identity and aging. Created in collaboration with MIT in 1976, the series presents five portraits of Burson, in which she envisioned how her face might change at different ages. This work laid the conceptual groundwork for her pioneering age-morphing technology, influencing both her own artistic practice and the development of facial manipulation techniques used today.

Aging at 18, 30, 45, 60, and 70, 1976, video work

Motion pictures and animations often use morphing techniques to create seamless transitions between images. This geometric interpolation method has existed for centuries, with early examples like tabula scalata and mechanical transformations. One of the earliest and most effective techniques was “dissolving,” developed in the 19th century, where images gradually transitioned, for example, a landscape shifting from day to night.

Aging at 18, 30, 45, 60, and 70, 1976, fine art prints

A pioneering figure in this field was Nancy Burson, an American artist and photographer born in 1948, who was the first artist to use digital morphing technology in art. Burson became interested in digital technology after visiting the 1968 exhibition “The Machine as Seen at the End of the Mechanical Age” at the Museum of Modern Art. The show inspired her to explore technological processes in her own practice.

Portrait of Nancy Burson

During this time, Nancy Burson envisioned software that could age a user's face. In 1976, she contacted Nicholas Negroponte at MIT, where she began working with the Architecture Machine Group. MIT researchers had recently developed a rudimentary digitizer that allowed a computer to process and manipulate facial images, and together with Thomas Schneider, she began working on her idea.

The setup at MIT: the first version of their digitizer

The foundation for this pioneering age-morphing technology was laid with the work “Five Self-Portraits at Ages 18, 30, 45, 60, and 70”. Created in 1976, this piece features five portraits of Nancy Burson at different ages. For this series, Burson worked together with a makeup artist to envision how she might appear in the future.

Documentation of the Aging Self Portraits, 1976, vintage mounted photograph

This work, along with other studies she created of herself, allowed her to explore the aging process and contributed to the development of the software, which she patented in 1981 as “The Method and Apparatus for Producing an Image of a Person’s Face at a Different Age”.

Aging Study, 1976, drawing

The software simulated the aging process by scanning the viewer’s face, allowing them to interactively adjust data points for features such as the eyes, nose, and mouth. An aging template would then apply transformations corresponding to the viewer’s facial structure. This triangular grid remains the standard morphing grid in the industry today, used in AI software and applications like Snapchat.
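As a simplified illustration of the geometric idea (our own sketch, not Burson’s patented method), corresponding landmark points on two faces can be blended by linear interpolation:

```python
# Simplified illustration of geometric morphing: corresponding landmark
# points on two faces are blended by linear interpolation. The landmark
# coordinates below are made up for the example.
young = [(30.0, 40.0), (70.0, 40.0), (50.0, 70.0)]   # e.g. eyes and mouth
aged  = [(28.0, 44.0), (72.0, 44.0), (50.0, 78.0)]   # same features, shifted

def morph(src, dst, t):
    """Blend two landmark sets; t=0 gives src, t=1 gives dst."""
    return [(sx + (dx - sx) * t, sy + (dy - sy) * t)
            for (sx, sy), (dx, dy) in zip(src, dst)]

halfway = morph(young, aged, 0.5)
print(halfway)  # [(29.0, 42.0), (71.0, 42.0), (50.0, 74.0)]
```

A production morphing system warps the pixels inside each triangle of the landmark grid as well; this sketch only shows how the grid points themselves move.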

Original Morphing Grid, 1981

Using this software, Nancy Burson altered the faces of celebrities, models, and even a Barbie doll to address broader social and political themes. She later created the “Age Machine”, an interactive work where visitors could see future versions of themselves. Her research had implications beyond the art world. Law enforcement agencies adopted her technology to create age-progressed images of missing children, aiding in their identification and recovery.

Her work has been exhibited in major institutions, including the Museum of Modern Art, the Metropolitan Museum of Art, the Whitney Museum, the V&A Museum, and the Centre Pompidou. Currently, pieces from her oeuvre are on view in LACMA’s exhibition “Digital Witness”.

 

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Generative Music

Aaron Penne and Boreta, Rituals - Venice #874 & #384, 2021, Source: ritualsalbum.xyz

In today’s History of Generative Art series, we explore a fascinating genre, generative music. Unlike traditional composition, generative music is created through algorithmic or rule-based systems that enable continuous variation and non-repetitive structures. Its development has closely followed technological advancements, from early computer music experiments to contemporary AI-generated works.

The tables of Wolfgang Mozart's 'Musikalisches Würfelspiel’, Source: ai.gopubby.com

Max Mathews, Pioneer in Making Computer Music, Source: nytimes.com

The idea of musical randomization dates back to the 18th century with the Musikalisches Würfelspiel, which allowed composers to create pieces by rolling dice. As technology advanced, music’s mathematical nature made it ideal for computational approaches. Unlike visual media, audio data requires significantly less computational power, enabling digital manipulation well before real-time video processing became feasible.
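The dice-game principle is easy to sketch. Below is a toy Python reconstruction (the actual 18th-century lookup tables are not reproduced; the table here is hypothetical): each of 16 bars is picked from a table indexed by the roll of two dice.

```python
import random

# Toy reconstruction of the Musikalisches Wuerfelspiel idea. Each of the
# 16 bars of a minuet is chosen from a lookup table indexed by the roll
# of two dice (totals 2-12), so chance assembles the final piece.
random.seed(42)

# Hypothetical table: for each bar position, 11 candidate measure numbers.
table = {bar: [bar * 100 + roll for roll in range(2, 13)] for bar in range(16)}

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

# Assemble one 16-bar piece by rolling the dice once per bar.
piece = [table[bar][roll_two_dice() - 2] for bar in range(16)]
print(piece)
```

Because the historical tables were pre-composed so that any combination remains musically coherent, the dice supply variation while the composer retains control of the material, the same division of labor found in rule-based generative music today.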

The first breakthrough came in 1957, when Max V. Mathews developed MUSIC I, the first digital audio synthesis program, at Bell Laboratories. In the 1980s and 1990s, David Cope created Experiments in Musical Intelligence (EMI), a system that analyzed and recomposed music in the styles of Bach, Mozart, and Rachmaninov. The results were so convincing that some listeners mistook them for human compositions.

Lejaren Hiller at the Experimental Music Studio, Source: burchfieldpenney.org

While scientists were developing computational approaches, several composers had already begun experimenting with algorithmic and self-generating musical processes. Lejaren Hiller composed the “Illiac Suite” (1957), the first complete work generated by a computer algorithm, using stochastic methods and rule-based selection.

The term “generative music” was popularized by Brian Eno in 1995, when he used SSEYO’s Koan software to create music that continuously evolved based on predefined rules. Eno described the genre as an ever-changing system rather than a fixed composition. His 1996 release “Generative Music 1” demonstrated these principles by showcasing tracks generated with the software. In 2024, a documentary about him, Eno, was released; following a similar approach, it re-edits itself for each screening.
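One rule system often associated with Eno’s approach is layering loops of different lengths so the combined texture drifts in and out of phase and rarely repeats. A toy sketch of that idea, with made-up cycle lengths and pitches rather than Koan’s actual rules:

```python
from math import lcm

# Five voices, each looping with a different cycle length (in beats).
# The lengths and pitches are illustrative assumptions: each voice
# sounds once at the start of its own cycle.
loops = {"Ab": 17, "C": 21, "Db": 23, "F": 25, "Eb": 31}

def sounding(beat: int) -> list[str]:
    """Return the voices that sound at a given beat."""
    return [note for note, length in loops.items() if beat % length == 0]

# Because the cycle lengths are pairwise coprime, the combined texture
# only repeats after their least common multiple -- millions of beats.
period = lcm(*loops.values())
```

With these five lengths the full pattern takes over six million beats to recur, which is the sense in which such a system is “ever-changing” rather than a fixed composition.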

Brian Eno, Generative Music 1, 1996, Source: progarchives.com

Artists now use programming languages and algorithms to create rule-based systems, sometimes combining their generative visuals with generative sound. “Rituals – Venice” (2021) is an audiovisual work that merges Aaron Penne’s visuals with Boreta’s meditative music. The calming, immersive experience was released on Art Blocks. Its code generates continuous, non-repeating output for over 9 million years.

AI has also influenced generative music. Deep learning techniques, such as those used in DeepMind’s WaveNet (2016), have enabled realistic neural synthesis of sound. In 2023, Patten released “Mirage FM”, one of the first full-length albums composed entirely with Riffusion, an AI model that turns text prompts into audio via generated spectrogram images. The album transforms written descriptions into dreamlike compositions that blend pop, techno, hip-hop, R&B, and ambient.

AI and blockchain have enabled new methods of composition, distribution, and interaction in generative music. While the field faces challenges in areas such as creative control, authorship, and long-term viability, the genre remains a subject of study and experimentation in both artistic and technological contexts.

Agnes Ferenczi

COLLECTOR’S CHOICE - Mistaken Identity by Mario Klingemann

Mistaken Identity by Mario Klingemann represents a complex exploration of neural networks through the deliberate manipulation of their internal structures. This work consists of three videos created using generative adversarial networks (GANs) and neural networks. Exhibited at the ZKM in Karlsruhe during the Beyond Festival in October 2018, the triptych investigates how neural networks interpret visual information.

Mario Klingemann, Mistaken Identity, 2018, a video triptych

Mario Klingemann is known for his pioneering work in generative and AI art, with preferred tools including neural networks, code, and algorithms. His artistic practice reflects a systematic approach and curiosity about understanding complex systems. Klingemann often dissects systems such as neural networks, analyzes their components, and reconstructs them to explore patterns and to recreate and understand the system’s behaviors.

Portrait of Mario Klingemann

The approach of deconstructing and reconstructing forms to understand underlying structures and patterns has historical precedents in art. Classical painters such as Leonardo da Vinci conducted anatomical dissections to gain a deeper understanding of the human form. Similarly, Cubist artists such as Pablo Picasso and Georges Braque fragmented objects into geometric components, breaking them down and reassembling the forms to depict multiple perspectives.

Mistaken Identity, 2018 at ZKM in Karlsruhe during the Beyond Festival in October 2018

To understand how AI functions and perceives human forms, Klingemann developed a method called “neural glitch”. This technique involves deliberately introducing errors into fully trained GAN models. Initially, GANs are trained to generate realistic human faces. Once the models achieve a near-perfect state, Klingemann disrupts key neural components.

Mario Klingemann, Mistaken Identity – Chapter #1, 2018

These disruptions include small changes, such as altering, deleting, or exchanging the training weights, which impact the numerical values that determine how the model synthesizes images. These changes result in significant and often unpredictable alterations to the output.
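As an illustration of the idea, not Klingemann’s actual code: the sketch below perturbs a flat list of weights by zeroing, rescaling, or swapping randomly chosen entries, mirroring the “delete, alter, exchange” operations described above.

```python
import random

def glitch_weights(weights: list[float], rate: float, seed: int = 0) -> list[float]:
    """Perturb a flat weight vector: a fraction `rate` of entries is
    deleted (zeroed), altered (sign-flipped and rescaled), or exchanged
    with a randomly chosen partner entry. Illustrative only."""
    rng = random.Random(seed)
    out = list(weights)
    for i in range(len(out)):
        if rng.random() >= rate:
            continue  # this entry is left untouched
        op = rng.choice(["delete", "alter", "exchange"])
        if op == "delete":
            out[i] = 0.0
        elif op == "alter":
            out[i] = -out[i] * rng.uniform(0.5, 2.0)
        else:  # exchange with a random partner
            j = rng.randrange(len(out))
            out[i], out[j] = out[j], out[i]
    return out

# A toy weight vector; a real GAN layer holds millions of such values.
original = [0.12, -0.48, 0.33, 0.91, -0.07, 0.56]
glitched = glitch_weights(original, rate=0.5)
```

In a real model these values parameterize convolutional layers, which is why even small perturbations cascade into large, unpredictable changes in the generated image.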

Mistaken Identity, 2018 at ZKM in Karlsruhe during the Beyond Festival in October 2018

They affect both texture and semantic levels, changing the arrangement of facial features and altering finer details, such as skin tone or shading. The resulting outputs range from slightly altered portraits to entirely abstract forms. The process demonstrates how neural networks interpret and perceive human faces differently than humans do.

Mistaken Identity, 2018 at Future U exhibition at RMIT Gallery, Swanston Street, Melbourne, Australia in 2021

The final result consists of three nearly two-hour-long videos, presented as a triptych for the first time at ZKM in Karlsruhe in 2018, and later at the Future U exhibition at RMIT Gallery on Swanston Street, Melbourne, Australia, in 2021.

The first chapter of the series, Mistaken Identity - Chapter #1, is part of the Seedphrase collection.

Agnes Ferenczi

HISTORY OF GENERATIVE ART - Metaverse

Krista Kim, Mars House, 2020, Source: sothebys.com

The concept of virtual worlds and digital identities existed long before the term 'metaverse' became widely recognized. The origins can be traced back to early science fiction, which later inspired practical applications like gaming, virtual reality, and social digital spaces. Today, the rise of Web3 culture, including decentralized platforms, blockchain-based digital assets, and interactive virtual experiences, further integrates these virtual worlds into everyday life.

The metaverse refers to a network of virtual spaces where users interact through digital avatars. These environments support social interaction, digital economies, gaming, education, and more. It incorporates technologies such as virtual reality (VR), augmented reality (AR), blockchain, and traditional online platforms to create digital worlds.

Ivan Sutherland, Sword of Damocles, 1968, Source: researchgate.net

The foundations of virtual reality were laid in the 19th and 20th centuries with early discussions about immersive artificial environments. In 1938, French playwright Antonin Artaud described the illusory nature of theater as “virtual reality” in his collection of essays, “The Theater and Its Double”. An even earlier theoretical concept is found in the writings of Stanley G. Weinbaum, whose 1935 short story “Pygmalion’s Spectacles” envisioned a pair of goggles that could transport users into an interactive world.

Stanley G. Weinbaum, Pygmalion’s Spectacles, 1935, Source: sothebys.com

By the 1960s, technology began catching up with these ideas. In 1962, Morton Heilig developed the Sensorama, an early immersive multimedia machine simulating a motorcycle ride with 3D visuals, sound, vibrations, and scents. In 1968, Ivan Sutherland created the first VR headset, the “Sword of Damocles”, which featured mechanical tracking and wireframe graphics.

Morton Heilig, Sensorama, 1962, Source: historyofinformation.com

Neal Stephenson, Snow Crash, 1992, Source: goodreads.com

In the 1980s, the idea of simulated realities appeared more frequently in literature. In 1981, Vernor Vinge’s novella “True Names” introduced a virtual world accessible through a computer interface. In 1984, William Gibson’s “Neuromancer” described a digital space called “The Matrix”, where users could navigate a connected network. These books, along with films like “Tron” (1982) and “Ready Player One” (2018), further explored these themes.

The term “metaverse” was first introduced in Neal Stephenson’s 1992 novel, “Snow Crash”. In this book, the metaverse was a virtual space where people could escape from reality and interact in a 3D environment using avatars.

As virtual worlds evolved, artists and scientists explored their artistic potential. Active Worlds, launched in 1995, was an early online 3D platform where users could navigate virtual spaces, interact through avatars, and build their own environments. In 2005, Herbert W. Franke created the Z-Galaxy within Active Worlds. Unlike most virtual spaces, it featured mathematically generated structures, galleries, and a sculpture park. It first showcased Franke’s own work but later included exhibitions by other artists and scientists.

Herbert W. Franke, Z-Galaxy, 2005, Source: art-meets-science.io

Video game developers had experimented with multiplayer virtual spaces even earlier. In 1986, Lucasfilm Games released “Habitat”, an early example of a graphical multiplayer virtual world that allowed users to interact using digital avatars. In 2003, the launch of “Second Life” brought the metaverse concept closer to reality. Users could create digital identities, purchase virtual land, and engage in social and economic activities.

Linden Lab, Second Life, 2003, Source: indiatimes.com

Steven Lisberger, Tron, 1982, Source: theguardian.com

The industry gained new momentum in the 2010s, with advances in computing power and graphics. In 2011, Palmer Luckey developed the Oculus Rift prototype, reigniting interest in VR. Companies like Oculus, Microsoft, Sony, and HTC introduced VR headsets that expanded the use of virtual reality beyond gaming, including business, education, and industry applications.

In 2021, Facebook rebranded as Meta to focus on metaverse development. Around the same time, Web3 technologies like decentralized finance (DeFi), NFTs, and blockchain governance were gaining traction. Companies and creators explored NFT-based digital ownership, driving interest in virtual land and new digital economies.

As technology advances, the metaverse has the potential to reshape how we connect, create, and experience digital life.

Agnes Ferenczi

COLLECTOR’S CHOICE - Early AI Video Works by Memo Akten

“We are all connected. To each other, biologically. To the earth, chemically. To the rest of the universe atomically.” – Neil deGrasse Tyson

Between 2018 and 2020, Memo Akten produced two early AI series, the "BigGAN Study" and "We Are All Connected", representing his first explorations with generative adversarial networks. Both series feature audio-reactive visual compositions that respond to original music composed by the artist himself without the use of AI. The works reflect the interconnectedness of all forms of life and matter, from microcosm to macrocosm.

We are all connected #04 - Underworld, 2018-2020 by Memo Akten

For more than a decade, Memo Akten has been working with various AI models in his art. His works often focus on intelligence in nature, intelligence in machines, perception, consciousness, neuroscience, ritual, and religion. This interdisciplinary approach allows him to connect technology, science, and spirituality in his moving images, sounds, large-scale responsive installations, performances and audio-reactive visual compositions.

Portrait of Memo Akten

The history of audio-reactive visuals originates in the early 20th century, when experimental filmmakers began exploring methods to visually represent sound. Oskar Fischinger and Mary Ellen Bute developed early abstract animations synchronized with classical and experimental music. In the 1960s and 1970s, John Whitney further pioneered this field through the use of analog and digital computers to generate real-time synchronized visuals, where visual patterns corresponded directly to sound input.

BigGAN Study #2 - It's more fun to compute, 2018 by Memo Akten

Memo Akten’s practice builds on this tradition. "BigGAN Study" is one of his first projects where he combined video with audio, producing audio-reactive visual compositions. Akten began working on this series in 2018, using an AI model known as BigGAN, developed by Google DeepMind. Compared to earlier GAN models, BigGAN had the capability to generate more detailed and diverse images due to its use of larger datasets and high-dimensional latent spaces.

BigGAN Study #4 - BigGAN Madness, 2018 by Memo Akten

BigGAN had the downside of relying on a large-scale dataset sourced from the internet. As Memo Akten became concerned about the legal and ethical implications of using inputs without consent, he began building and training his own AI models in 2017, sourcing images from the public domain, CC0 licenses, and his personal archive to ensure compliance with ethical standards. His second series, “We Are All Connected”, was created using this custom model between 2018 and 2020.

We are all connected #05 - Mad World, 2018-2020 by Memo Akten

The uniqueness of these series is that Memo Akten later revisited the videos, dubbing the visuals with original music he composed. For some of the videos, he created the music in the 1990s as a teenager using a 486 or Pentium computer, a 14-inch CRT monitor, FX pedals, an electric guitar, and Cakewalk software. The moving images in the works are synchronized to match the tempo and feel of these compositions.

We are all connected #06 - Plug me in, 2018-2020 by Memo Akten 

Both series reflect Memo Akten’s philosophical concerns, emphasizing the interconnectedness of all forms of life and matter, from microbes to galaxies. The works are presented as continuously evolving images accompanied by Akten's own music. In both series, rather than relying on random latent walks, he employed deliberate, controlled explorations of the latent space.
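A controlled latent walk can be sketched as interpolation through hand-picked keyframe vectors instead of random steps. This toy Python example (an illustration of the general technique, not Akten’s actual pipeline) shows the idea:

```python
def lerp(a: list[float], b: list[float], t: float) -> list[float]:
    """Linearly interpolate between two latent vectors at t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def latent_walk(keyframes: list[list[float]], steps: int) -> list[list[float]]:
    """A controlled walk through latent space: visit hand-picked keyframe
    latents in order, with `steps` interpolated points per segment,
    instead of wandering randomly."""
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for s in range(steps):
            path.append(lerp(a, b, s / steps))
    path.append(keyframes[-1])
    return path

# Three hand-picked 4-dimensional latents (toy values; real GAN latents
# typically have hundreds of dimensions) traversed smoothly.
walk = latent_walk([[0, 0, 0, 0], [1, 0, -1, 2], [0, 1, 1, 0]], steps=4)
```

Feeding each point of such a path through a generator yields a video whose imagery morphs deliberately between chosen states, which is what distinguishes a controlled walk from a random one.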

We are all connected #08 - Avril 14, 2018-2020 by Memo Akten

The resulting works serve as meditative audiovisual experiences that reflect the complex, interconnected nature of existence.

We are all connected #05 – Mad World is part of the Delronde collection.
