Richard Bright: Can we begin by you saying something about your background?
Daniel Ambrosi: I’ve always been a very visually focused individual with a strong aptitude in math and science. This led me to pursue a degree in architecture at Cornell University. In my second year at Cornell, I learned of the pioneering research that was taking place in Cornell’s Program of Computer Graphics under the leadership of Dr. Donald Greenberg, and was instantly hooked. Don became my mentor and invited me to stay on in his lab after completion of my architecture degree to earn a Master’s in 3D Graphics. Upon leaving Cornell, I joined NBBJ, one of the largest architecture firms in the U.S., where I primarily served as CAD Manager for close to 10 years, helping veteran architects make the transition from pencils to computers. Increasingly during that period, I found myself drawn to Silicon Valley, and in the mid-1990s made a career transition to tech marketing at Silicon Graphics, an early provider of computer graphics workstations. A common thread through both careers has been the opportunity to develop my visual communications skills which, given that our visual system provides the greatest bandwidth to the brain, I see as critically important to sharing our stories.
RB: Have there been any particular influences to your art practice?
DA: I am utterly captivated by the 400-year history of landscape painting in the Western world. The arc of this art genre is fascinating to me for two main reasons: the first is that, as an avid hiker and skier, I am an ardent lover and tireless seeker of beautiful vistas; the second is that I’m deeply intrigued by those rare images capable of communicating the power of such special places. Why does this sometimes work, but often not? What did Claude Lorrain, John Constable, Thomas Cole, J.M.W. Turner, Claude Monet, Thomas Moran, Maxfield Parrish, Eyvind Earle, and David Hockney know that others didn’t? And for those of us who don’t paint, why does traditional photography so often fail to capture the grandeur and impact of great places we visit? This has been the inquiry that has driven my own visual experimentations and art practice for decades now, and the aforementioned are but some of the masters who have shed light on my path.
RB: What is the underlying focus and vocabulary of your work?
DA: The underlying focus of my work is to reverse engineer the psychology behind the human experience of special places. What I mean by “special places” are precise locations in our world where something very powerful happens; namely, a reaction that goes beyond the visual to also encompass a visceral and cognitive response. Have you ever noticed that in certain places, at certain times, the scene before you goes beyond a mere sight and becomes something you feel in your body? Whenever I find myself in the midst of such scenes, it stops me dead in my tracks and I feel it right in the middle of my chest, usually because it makes me gasp. And when a scene is powerful enough to make that happen, I then find myself waxing philosophical, wondering for example how other creatures would perceive the scene or what I would feel if my vision was many times sharper. What’s real anyway?
My vocabulary is dominated by those technologies, techniques, compositions, and features that I find most effectively deliver fidelity, intricacy, vibrancy, and immersion. I don’t want viewers to merely see my images; I want my images to take their breath away, and inspire them to think. I want to trigger a sense of place that is beyond real and is wonderful, like an intensely vivid shimmering dream.
RB: When and how did you begin using computers and artificial intelligence to make art?
DA: I’ve been using computers to make art since my days in the computer graphics lab at Cornell back in the early 1980s. My opportunity to use artificial intelligence in my art in a serious way began in the wee hours of January 28, 2016, when my engineering collaborators, Joseph Smarr (Google) and Chris Lamb (NVIDIA), finally handed off to me a customized version of Google’s open source software, DeepDream, that they had modified to handle my giant panoramic images. This took them about 6 months of intermittent hacking on nights and weekends. I’m eternally grateful to these two brilliant young engineers who were crazy enough to honor my request for help and too stubborn to give up once they got started. DeepDream as released was not much more than demo software; it took a lot of work to keep it from crashing on my multi-hundred megapixel images. But they succeeded and it still works beautifully to this day.
RB: Why is working with computational photography important to you?
DA: Computational photography in general, and artificial intelligence in particular, enables me to realize a vision, accomplish an objective, and execute works that simply would not be possible without these tools. Artists have always worked hard to master their tools, and you can see their individual and collective progress with that adeptness throughout and across their careers. This is still the case in the digital realm; one must learn to master one’s software tools. But it’s fundamentally different when working with AI because in some sense it really does have a mind of its own. When my AI sets to work on my images, it performs cognitive processing not unlike our own and reflects back to me its interpretation. I then have the opportunity to tweak that interpretation toward one that I find more pleasing or that more closely achieves my desired outcome, but the details are not in my hands. This is truly a new development in art and is the reason some are saying AI is the art movement for the 21st century; never before has an artist been able to collaborate with their tools to this extent. I believe that what I’m doing in applying AI thoughtfully to works firmly rooted in the tradition of landscape art advances this tradition in ways that are both novel and relevant to our time.
RB: Can you say something about your working methods?
DA: My work starts with a camera, the first and perhaps simplest step in my process. I collect overlapping views of a scene both vertically and horizontally, and for each view I shoot multiple exposures from dark to light. I then use three different commercial software packages to stitch and blend this cubic array of images into a single immersive, vibrant, extremely high-resolution scene. I call this process XYZ photography, but technically I am producing multi-row high dynamic range panoramas. In any given day of chasing vistas, I typically attempt to capture only a handful of scenes at most. My technique is consistently reliable technically, but succeeds aesthetically only with certain scenes. At this point, I’ve had enough practice to know when a scene won’t work aesthetically, but I’m still unable to tell when a scene will work. The failure rate is high and I’m lucky if I can capture even one compelling scene in a full day of hiking.
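The dark-to-light blending step in that process can be sketched as a simple exposure fusion. This is an illustrative stand-in for the commercial stitching and blending packages Ambrosi uses (which are not named in the interview); the function name and the well-exposedness weighting are assumptions, loosely modeled on Mertens-style fusion:

```python
import numpy as np

def exposure_fusion(exposures, sigma=0.2):
    """Blend a dark-to-light exposure bracket into one image.

    Each pixel in each exposure is weighted by how well exposed it is
    (closeness of its intensity to mid-grey), then the exposures are
    combined as a per-pixel weighted average. `exposures` is a list of
    float arrays in [0, 1], all with the same shape.
    """
    stack = np.stack(exposures)                    # (n, h, w)
    # Well-exposedness weight: Gaussian centred on mid-grey (0.5).
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)

# Three bracketed "exposures" of the same tiny synthetic scene:
scene = np.linspace(0.0, 1.0, 8).reshape(1, 8)
bracket = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
fused = exposure_fusion(bracket)
```

Because the weights are normalised, each fused pixel is a convex combination of the bracketed values, so the result stays in the valid intensity range without further clipping.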
I then choose to turn some small percentage of the successful landscape scenes I capture into Dreamscapes (my AI-augmented images). DeepDream provides me access to 84 layers in its neural network, each of which produces a distinctly different style of hallucination. I’ve exhaustively catalogued each of these styles and carefully studied the effects of each of the four parameters that control the scale and intensity of these hallucinations. I let the content of the scene guide my choices as I attempt to find a “dreaming” style that I feel will be compatible with my source imagery. Because Joseph and Chris staged our AI on a monster compute server in the Amazon EC2 cloud with 4 graphics processing units, I get to run 4 dreaming tests simultaneously, at least at low resolution. This allows me to rapidly converge on a combination of parameter settings that works best, in my opinion, for the given scene.
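The four controls mentioned here map onto the knobs of DeepDream's coarse-to-fine gradient-ascent loop; in the open-source release those are, roughly, step size, iteration count, octave count, and octave scale. A minimal framework-free sketch of that loop follows, with a stand-in `layer_gradient` function in place of the real backpropagation through a convolutional network (the actual software differentiates a chosen layer's activations; everything here is illustrative):

```python
import numpy as np

def layer_gradient(img):
    """Stand-in for the gradient of a chosen network layer's activation
    with respect to the image (DeepDream proper backpropagates through
    a trained convolutional network to get this)."""
    return np.sin(img * 8.0)  # hypothetical pattern-amplifying gradient

def resize(img, shape):
    """Nearest-neighbour resize to `shape` (stand-in for interpolation)."""
    h, w = img.shape
    ys = (np.arange(shape[0]) * h // shape[0]).clip(0, h - 1)
    xs = (np.arange(shape[1]) * w // shape[1]).clip(0, w - 1)
    return img[np.ix_(ys, xs)]

def deepdream(img, step=0.05, n_iter=10, n_octaves=3, octave_scale=1.4):
    """Gradient ascent on the image, from coarse scales to fine."""
    octaves = [img]                          # image pyramid, largest first
    for _ in range(n_octaves - 1):
        h, w = octaves[-1].shape
        octaves.append(resize(octaves[-1],
                              (int(h / octave_scale), int(w / octave_scale))))
    detail = np.zeros_like(octaves[-1])
    for octave in reversed(octaves):
        detail = resize(detail, octave.shape)     # carry detail up a scale
        dreamed = octave + detail
        for _ in range(n_iter):                   # normalised gradient ascent
            g = layer_gradient(dreamed)
            dreamed = dreamed + step * g / (np.abs(g).mean() + 1e-8)
        detail = dreamed - octave                 # hallucinated detail only
    return dreamed

rng = np.random.default_rng(0)
source = rng.random((32, 32))
result = deepdream(source)
```

Running four such jobs in parallel, one per GPU, is what lets different parameter combinations be compared side by side at low resolution before committing to a full-resolution render.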
When I find a promising mix, I’ll commit to the full resolution process. This typically takes anywhere from 6 to 20 hours depending on the size of my image. Chris Lamb computed that a typical 10-hour job probably performs around 150 quadrillion floating-point math operations just to manipulate one scene.
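As a back-of-the-envelope check on that figure (this arithmetic is illustrative, not from the interview): 150 quadrillion operations spread over a 10-hour run implies a sustained rate of roughly 4 teraflops, which is plausible for a 4-GPU server of that era.

```python
ops = 150e15          # 150 quadrillion floating-point operations
seconds = 10 * 3600   # a 10-hour job
rate = ops / seconds  # sustained floating-point operations per second
print(f"{rate:.2e} flop/s")  # ~4.17e12, i.e. about 4 TFLOPS sustained
```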
RB: What do you aim to communicate to your audience through your art?
DA: We live in troubled times and it’s easy to get lost in the news cycle and fall into despair. At the same time, there is still a lot of beauty in this world, and a lot worth saving. There are also plenty of people working hard to make things better and promising technological developments that can help us. It is abundantly clear that artificial intelligence and deep learning algorithms will be increasingly important to the continued progress of scientists, engineers, and researchers. Less obvious to some observers is that these tools are radically advancing purely artistic and creative endeavors as well. Computational photography and artificial intelligence have given me the tools to put more beauty, joy, hope, and wonder into the world. With my AI-augmented Dreamscapes, my aspiration is to share with others my experience of special places. I want to project their power and beauty through your eyes in such a way that you feel it deeply in your body, and it engages your mind. My art aims to uplift and inspire.
RB: How important is the sense of scale in your Dreamscape series?
DA: Scale is important in both an absolute and relative sense in my Dreamscape series. In absolute terms, the grand format dimensions of my printed works enable viewers to become immersed in the scene and feel as though they can enter the environment. This visceral effect is enhanced by my choice to present these works with internal illumination.
In a relative sense, the large scale of the overall scenes contrasts dramatically with the small scale of the hallucinations, which typically aren’t noticed until the viewer gets quite close. This invariably provokes quite the cognitive reaction as the surprise element forces the viewer to question what they are seeing. Interestingly, once viewers know the hallucinations are there, they can typically spot them from about twice as far back from the canvas.
RB: A lot of the processes behind creative thinking are still unknown. Do you think AI has a big role to play here in helping the understanding about our own creative methodology?
DA: Absolutely. I believe we’re already seeing that happen to some degree. I’ve heard that the gameplay of some of the world’s leading Go masters has improved after being beaten by Google’s AI program, AlphaGo. Apparently some of the highly unusual but ultimately successful moves made by AlphaGo opened their minds to new insights and intuitions about this ancient game.
In my own case, my extensive exposure to DeepDream’s way of seeing my landscape images has caused me to see actual landscapes differently at times, especially in certain lighting conditions. Perhaps this is more a case of creative seeing rather than creative thinking.
RB: What future projects are you currently working on?
DA: I just got back from two weeks of shooting in the Swiss Alps where I had the good fortune of a continuous run of incredibly good weather conditions. I’m currently working through the scenes I captured, some of which I will turn into Dreamscapes.
Further out, I’m keenly interested in alternative modes of presentation of my work. I’m extremely impressed by what Bruno Monnier, president of Culturespaces, and his team are doing at L’Atelier des Lumières in Paris. I’ve only seen videos, but the Gustav Klimt exhibition they are hosting in a renovated warehouse in Paris using immersive digital projection and advanced sound systems looks thrilling! Apparently they have a smaller room given over to emerging artists exploring AI and digital installations. I think this would be an ideal way to show my work given its intricacy, color, and depth. I hope to have the opportunity to stage a Dreamscapes exhibition in this manner some day. I think it would blow people’s minds.
All images copyright and courtesy of Daniel Ambrosi