Who made it?
When I think about who created the material segments that make up Figures #11 and 12, I am unsure whether I can claim to be the creator. In designing the other sculptures, my process was to draw up the shapes in a CAD program and then have a 3D printer realize those shapes in what I consider actual reality, melting plastic into a form corresponding to the one drawn up in virtual reality. In that case, I would still have no problem saying: "I made that." In this case, however, I am no longer so confident. My conception of the tool seems to have crossed a threshold, moving it from tool to entity: an entity that must have agency in order to create the forms that make up Figures #11 and 12.
My perception seems to be influenced by the level of abstraction. When working in CAD, I have no direct physical sensation of the shape I draw in virtual space, or of how it could translate into my physical reality, yet I interpret it as such. The software simulates something I can understand: three-dimensional space. Even though I have no insight into how this is achieved in the software, its familiarity makes me feel empowered. It operates in line with my self-image as something occupying space in the world by creating simulations of objects that seemingly do the same. In the generative algorithm's hidden workings, this understanding is no longer accessible; it does what it does without direct manipulation from the user. It is unlikely that any spatial considerations are part of the algorithms it develops to create the shapes. Those do not arise until the result appears on a two-dimensional computer screen and is perceived by a human sensorium that can interpret it as such.
The question arises: what is the difference between the representation of forms generated in this way and those generated by the simpler, but nonetheless algorithmic, procedures used to draw any shape in simulated three-dimensional space?
The dance of agencies
When designing in CAD software, I create objects in simulated 3D space. These are then visualized for me on a two-dimensional screen, and although I have no tactile impression of the objects, I can, through the mediation of a mouse and keyboard, "handle" them. The software I use is the mechanism that allows me to draw up the item in a simulated space.
In doing so, my agency is just one of many, even counting only the agencies of people. The virtual object I am manipulating is created by knowledge and effort resting on the shoulders of many. The silicon chips the software runs on and the programming languages created by some developers allow others to write code that can be compiled for those chips. The mathematicians, engineers, designers, and architects who developed the conventions the software adheres to also play their part.1 These systems have their own established conventions for how CAD software should be laid out, in turn based on long traditions of pre-computerized design practice. My interaction with the software also hinges on my human perceptual ability to understand the concept of simulation. Finally, the software is developed to create objects that correspond to fabrication methods and other infrastructure for making that exist in the world outside of the three-dimensional simulation.
Generative tools appear to me to inhabit a different ontological category than traditional CAD. I believe this is because I conceive of traditional tools as created by people in their entirety. Generative design, by its name and method, seems more "alive," and because its working method is hidden from me (since it develops by itself), I experience it as different from the other technology I use to realize figures.
When designing in CAD, one typically specifies the start and end points of lines, creating a two-dimensional geometrical shape. This is then extruded into three dimensions. The space between the points is filled in by the software using various algorithms, depending on line characteristics such as whether the line is curved. The extrusion of a simulated two-dimensional shape into three-dimensional virtual space is generated by specifying distances that the software then seemingly fills. Of course, from the CAD software's point of view, there is no material, not even simulated material. Any three-dimensional object consists only of minimal mathematical descriptions of geometrical shapes and their extents. For the CAD software, there is no material between those extents because there is nothing between those extents at all, neither material nor non-material. The distance between what the virtual object is to the software and what it is to the human user seeing it on a screen is vast. It only becomes a shape in the encounter with the human sensorium and a perception able to categorize it as such.
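The claim that a solid "consists only of minimal mathematical descriptions" can be made concrete in a small sketch. This is an illustration of the idea, not any real CAD kernel's data model; the class and its names are invented for the example. An extruded solid is stored as nothing but a handful of numbers, and any property of its "material" is computed on demand rather than existing anywhere in the data.

```python
# A minimal sketch (not any real CAD kernel's data model) of how an
# extruded solid can be stored: nothing but boundary points and a height.
from dataclasses import dataclass

@dataclass
class ExtrudedSolid:
    # 2D outline as (x, y) vertex pairs; the "edges" between them exist
    # only as the convention that consecutive vertices are connected.
    outline: list
    height: float  # extrusion distance into the third dimension

    def volume(self):
        # Shoelace formula for the outline's area, times the height.
        # The "material" of the solid is never stored; it is implied
        # by these few numbers and computed only when asked for.
        pts = self.outline
        area = 0.0
        for i in range(len(pts)):
            x1, y1 = pts[i]
            x2, y2 = pts[(i + 1) % len(pts)]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2 * self.height

# A 20 x 10 rectangle extruded 5 units: nine numbers describe the whole solid.
box = ExtrudedSolid(outline=[(0, 0), (20, 0), (20, 10), (0, 10)], height=5)
print(box.volume())  # 1000.0
```

Everything between the extents, the entire apparent "body" of the object, is produced only at render time for a human viewer; the software itself holds nothing but the description above.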
The shape I perceive on the screen as a two-dimensional representation of an object minimally simulated in 3D space is generated by computer algorithms, just as the designs appearing after running a generative algorithm are. The software fulfils a minimal set of requirements for human conception to interpret it as a two-dimensional representation, a simulation, of a solid object. Most industrial design is created using technology like CAD. That means that virtual space, and the laws it adheres to, is instrumental in shaping the physical objects we surround ourselves with and interact intimately with daily. This familiarity, in turn, informs the choices and preferences of form I have when interacting with CAD software.
In its conventional use, the CAD software draws up shapes in virtual space using algorithmic processes adhering to boundaries I define. The generative algorithms also draw up shapes in virtual space using algorithmic functions adhering to boundaries I define. Described in that way, there seems to be no significant difference between the two methods of creating shapes. But I, as the "creator," experience them as different. One makes me question my agency as a maker, and the other does not. Why?