DeepNude: What image technology does DeepNude involve?
By Fields Corrielus 2019-07-10
According to its publisher, DeepNude was built by a very small team, and the technology is clearly immature. Most photos processed by DeepNude come out with obvious artificial traces: cartoon-character inputs produce completely distorted results, and most images, especially low-resolution ones, show visual artifacts.
Of course, the target "pictures" are all of women. Motherboard, the technology outlet that first exposed the app, said it had tested dozens of photos and found that the generated nudes were most realistic when the inputs came from the Sports Illustrated Swimsuit Issue.
The application instantly triggered widespread condemnation in the community as a textbook example of AI misuse. Even Andrew Ng came forward to denounce the project.
The application was quickly taken offline amid the backlash, but the aftershocks remain. In particular, discussion of the technology behind it has continued.
This week, a GitHub project described as "studying the techniques and papers related to the image generation and image inpainting used by DeepNude" rose onto the weekly trending list, winning a large number of stars. The project's founder has clearly researched the technology behind DeepNude in depth, laying out the technical building blocks such a system requires and which techniques might produce better results. I reprint it here in the hope that geeks can not only satisfy their technical curiosity but also put their technical power to correct use.
Deep Computer Vision in DeepNude
Image-to-Image Demo: DeepNude mainly uses the Image-to-Image technique proposed in Image-to-Image Translation with Conditional Adversarial Networks, which has many other applications, such as converting black-and-white sketches into colorful pictures. You can click the link below to try Image-to-Image technology in your browser.
In the left box, draw a simple sketch of a cat from your imagination, then click the pix2pix button to output the cat generated by the model.
- Papers: NVIDIA's 2018 papers Image Inpainting for Irregular Holes Using Partial Convolutions and Partial Convolution based Padding.
In the Image_Inpainting (NVIDIA_2018).mp4 demo video, you simply smear out unwanted content with the tool in the left panel; even if the smeared shape is very irregular, NVIDIA's model can "restore" the image, filling the blanks with very realistic content. It is one-click retouching with "no trace of Photoshop." The work comes from a team led by NVIDIA's Guilin Liu, who released a deep learning method that can edit an image or reconstruct a damaged one, even when the image has holes or missing pixels. This was the state-of-the-art approach as of 2018.
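The core idea of partial convolutions can be sketched in a few lines: each convolution window attends only to valid (non-hole) pixels and renormalizes by the fraction of valid pixels it sees, while the hole mask shrinks with every layer. The following is a minimal single-channel NumPy illustration of that update rule, not NVIDIA's implementation (which is a learned, multi-channel network layer):

```python
import numpy as np

def partial_conv(x, mask, kernel, bias=0.0):
    """Minimal single-channel partial convolution (valid padding).

    x:      2D image array (hole pixels may contain any value)
    mask:   2D binary array, 1 = valid pixel, 0 = hole
    kernel: 2D weight array
    Returns the convolved output and the updated (shrunken-hole) mask.
    """
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    window_size = kh * kw  # the sum(1) normalizer from the paper
    for i in range(oh):
        for j in range(ow):
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                patch = x[i:i + kh, j:j + kw] * m  # zero out hole pixels
                # renormalize by the fraction of valid pixels in the window
                out[i, j] = (kernel * patch).sum() * (window_size / valid) + bias
                new_mask[i, j] = 1.0  # any valid pixel "heals" this position
    return out, new_mask
```

With an averaging kernel on a constant image containing a hole, the renormalization recovers the constant value exactly, which is what lets stacked layers fill irregular holes.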
Pix2Pix (requires paired training data)
DeepNude mainly uses this Pix2Pix technology. Image-to-Image Translation with Conditional Adversarial Networks, from UC Berkeley, proposes a general solution to image-to-image translation problems using conditional adversarial networks.
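The pix2pix generator is trained on a conditional GAN term plus an L1 reconstruction term against the paired target (the paper weights the L1 term with λ = 100), which is exactly why paired data is required. A minimal NumPy sketch of that combined objective, assuming the discriminator outputs probabilities:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Generator objective in the style of pix2pix:
    non-saturating GAN term plus lam * L1 reconstruction term.

    d_fake: discriminator probabilities on (input, generated) pairs
    fake:   generated image
    target: the paired ground-truth image
    """
    eps = 1e-12  # numerical guard for log
    gan_term = -np.mean(np.log(d_fake + eps))  # push D's output toward 1
    l1_term = np.mean(np.abs(fake - target))   # stay close to the paired target
    return gan_term + lam * l1_term
```

The L1 term keeps low-frequency structure faithful to the pair, while the GAN term sharpens high-frequency detail; dropping either gives visibly worse translations in the paper's ablations.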
CycleGAN (no paired training data required)
CycleGAN uses a cycle-consistency loss for training, removing the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domains. This opens up many interesting tasks, such as photo enhancement, image colorization, and style transfer; you only need a source dataset and a target dataset.
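The cycle-consistency term can be written straight from the paper's formula: translate X→Y→X and Y→X→Y, then penalize the L1 distance back to the starting point (the paper weights it with λ = 10). A toy NumPy sketch, with the two generators passed in as plain functions:

```python
import numpy as np

def cycle_consistency_loss(G, F, x_batch, y_batch, lam=10.0):
    """Cycle-consistency term in the style of CycleGAN:
    lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 )

    G maps domain X -> Y and F maps Y -> X. Note that only unpaired
    samples from each domain are needed, never (x, y) pairs.
    """
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))   # X -> Y -> X
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))  # Y -> X -> Y
    return lam * (forward + backward)
```

When G and F are exact inverses of each other the loss is zero, which is the intuition behind the constraint: it forces the two translators to preserve content even though no paired supervision exists.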
Future
Image-to-Image may not even be required: we can use GANs to generate images directly from random values or from text.
Obj-GAN, a new AI technology developed by Microsoft Research AI, can understand natural-language descriptions, sketch a layout, synthesize images, and then refine details according to that sketch layout and the individual words of the text. In other words, the network can generate an image of an everyday scene from a text description of it.
An advanced version of the magic paintbrush: given just a sentence, or a whole story, it can generate a picture.
The new Microsoft research proposes ObjGAN, a GAN that can generate complex scenes from text descriptions. It also proposes another GAN, StoryGAN, which can "draw" a story: type in the text of a story and it outputs a comic strip.
Current state-of-the-art text-to-image models can generate realistic bird images from a single-sentence description. But text-to-image generation can do far more than produce one image per sentence: given a multi-sentence paragraph, it can generate a series of images, one per sentence, visualizing the whole story.
The most widely used Image-to-Image technology today is probably the beauty-filter app, so why not develop a smarter beauty camera?