
Will DALL-E 2 image generation software take jobs from photographers?


DALL-E 2 is a new artificial intelligence system that can generate realistic images and art from a written description. No complicated lighting and styling setup needed: you simply type in a description of what you want, and DALL-E 2 delivers the picture. Too good to be true? Too threatening for the long-thriving photography industry? See for yourself with my early test run.

OpenAI states in its mission statement:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at jobs of most economic value – benefits all mankind.

If you are a photographer who feels like modern technology is constantly chipping away at your role as an indispensable player in the marketing industry, then this quote is sure to make your heart ache. I was given early access to the platform, and I took it for a test run. Can it really do what we can do? Could it even “surpass” us? Is it a threat to photographers? Is it a resource? Or is it a combination of the two? Let’s have a look.

The software has several functions. The first, and best known, is that it can create an image or artwork from a written description. For example, on their Instagram you might find results for “a blue orange cut in half on a blue floor in front of a blue wall.”

Anyone would agree that the results are quite impressive. I even tried a random description myself.
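For readers who want to try this kind of prompt-to-image test outside the web interface, here is a minimal sketch using OpenAI's image generation endpoint. It assumes the `openai` Python package (v1+); the image count and size are illustrative choices, not the settings used in the article's tests, and the network call is skipped when no API key is configured.

```python
import os

# The example prompt quoted above, sent verbatim to the generation endpoint.
PROMPT = "a blue orange cut in half on a blue floor in front of a blue wall"

# Request parameters for the DALL-E 2 generation endpoint. The image count
# and size here are illustrative, not the article's settings.
request = {
    "model": "dall-e-2",
    "prompt": PROMPT,
    "n": 4,               # number of candidate images to generate
    "size": "1024x1024",  # DALL-E 2 supports 256x256, 512x512, 1024x1024
}

def generate(params):
    """Call the API only when a key is configured; return image URLs."""
    if not os.environ.get("OPENAI_API_KEY"):
        return None  # no credentials: skip the network call
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(**params)
    return [item.url for item in response.data]

# With a key set, this returns a list of URLs to the generated candidates:
# urls = generate(request)
```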

There’s no denying that the technology is impressive. However, my intention in testing it was to find out whether it could do the work of a professional photographer. Could a client, instead of hiring us, simply enter a description of what they want and skip the cost of hiring a professional?

Test One: Is the resulting image on par with that of a professional photographer?

My first test was to see if DALL-E 2 could produce visual content that could rival the visuals I was working on at the time. Case study one: a chocolate made from cocoa and dates. I typed in a description of the image I had created that morning: “a date with chocolate sauce poured over it.”

These are the results:

I suppose if you just need a photo of a date with chocolate, this can do it. However, if you care about lighting, composition, color correction, or aesthetics, these images will not meet the standard.

Next, I decided to put a model to the test. The brand once shot a scene in which a model dripped chocolate onto her tongue, and it was a very successful image. Along those lines, I typed: “A beautiful woman with chocolate flecks all over her body.”

My first observation was that the artificial intelligence appears to have chosen dark-haired white women to portray its idea of quintessential beauty, so I guess I got lucky! My second observation, as in the previous test, is that the aesthetic of the image is a total failure. It looks more like a scene from a Freddy movie than an advertisement selling chocolate and lust. The software impressed me in that it could magically generate images from a brief description, but it quickly became clear that it was in no way capable of generating an aesthetically successful set of images.

Test Two: Can the correction features be an asset to a photographer?

You may have seen DALL-E 2’s almost unbelievable editing results, such as the AI-retouched ladybug images featured in this Technology Times article. I decided to test these features as well. My first test was to remove a drop shadow and fill the area with a patterned shadow. I suppose I jumped right into the deep end.

After uploading my image, I chose “Edit Image” and entered “Remove the shadow of the lotion bottle and fill it with palm leaf shadow”. I was impressed with the images it produced.

It is a significant step beyond Photoshop, which cannot match a pattern like this.
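For the curious, this kind of edit can also be driven programmatically through OpenAI's image-edit (inpainting) endpoint. The sketch below assumes the `openai` Python package (v1+); the file names are hypothetical stand-ins, the mask's transparent pixels mark the region the model may repaint, and the call is skipped when no API key is configured.

```python
import os

# The edit prompt used in the test above.
EDIT_PROMPT = ("Remove the shadow of the lotion bottle "
               "and fill it with palm leaf shadow")

# Hypothetical file names. The edits endpoint expects square PNGs; the
# mask's transparent pixels mark where DALL-E 2 is allowed to repaint.
IMAGE_PATH = "lotion_bottle.png"
MASK_PATH = "shadow_mask.png"

def edit_image(image_path, mask_path, prompt):
    """Request an inpainting edit; returns an image URL, or None if no key."""
    if not os.environ.get("OPENAI_API_KEY"):
        return None  # no credentials: skip the network call
    from openai import OpenAI
    client = OpenAI()
    with open(image_path, "rb") as image, open(mask_path, "rb") as mask:
        response = client.images.edit(
            model="dall-e-2",
            image=image,
            mask=mask,
            prompt=prompt,
            n=1,
            size="1024x1024",
        )
    return response.data[0].url
```

Because the mask constrains where the model can paint, the rest of the product shot is left untouched, which is what makes this workflow usable for commercial retouching.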

For all the criticism I’ve leveled so far, I really have to take my hat off to the software in that regard. Next, I tried another real scenario. My salsa client once asked me to swap the red peppers pictured below for jalapeños. Needless to say, I had to reshoot it. Impressed with DALL-E 2’s earlier fix, I decided to see if it could have gotten the job done.

“Changing red peppers into jalapeños.”

(crickets)

Tomatoes!? … and the peppers are still red.

An obvious failure on this task.

Test Three: Can DALL-E 2 effectively add elements to a photographer’s photo?

In my product photography, I often shoot a lot of splashes and drops. My final test was to see if the software could do some of that work for me. Inspired by the images I took below, I asked it to add chips to the background.

These are the results for “Add tortilla chips to the background.”

I also asked the software to add more water rings to the image.

Here are the results for “Add a little juice to the background.”

The test above didn’t produce much of a splash, though some interesting alternatives appeared, like a blurry pineapple creeping into the frame.

Conclusion

After putting DALL-E 2 through countless challenges, it’s clear that this software has not yet met its mission of “surpassing” a professional photographer. While the software is an amazing feat, it doesn’t consistently deliver what is asked of it, and when it does, the aesthetics of the images are not up to par. I am truly amazed at the palm-leaf shadow repair, though, and I wonder if it could position itself as a more advanced retouching tool than Photoshop.

What are your thoughts on this new technology that aims to “outperform humans at jobs of most economic value”? Share your thoughts below.




