Defend the truth: seeing is not believing anymore


Imagine that someone replaced a character in a video with your face, just for fun. What would you think? This is exactly what DeepFake does: it combines and superimposes existing images onto source videos using generative adversarial networks (GANs) [1]. The fusion of the existing and source material yields a fake video that shows a person acting at an event that never happened in reality.
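To give a rough sense of the mechanism, a GAN pits two models against each other: a generator that synthesizes samples and a discriminator that tries to tell them apart from real data. The toy NumPy sketch below is my own illustration of that adversarial objective on a one-dimensional Gaussian, not DeepFake's actual face-swapping architecture:

```python
import numpy as np

# Toy GAN: the generator g(z) = wg*z + bg maps noise to samples; the
# discriminator d(x) = sigmoid(wd*x + bd) scores samples as real (1) or
# fake (0). Both are single linear units, trained adversarially on a
# 1-D Gaussian "real" distribution centered at 4.0.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

wg, bg = rng.normal(), 0.0   # generator parameters
wd, bd = rng.normal(), 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)                          # generator's noise input
    real = rng.normal(loc=4.0, scale=0.5, size=64)   # samples from "real" data
    fake = wg * z + bg

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    # (d - target) is the binary cross-entropy gradient w.r.t. the logit.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    grad_real = d_real - 1.0
    grad_fake = d_fake
    wd -= lr * np.mean(grad_real * real + grad_fake * fake)
    bd -= lr * np.mean(grad_real + grad_fake)

    # Generator update: fool the discriminator by pushing d(fake) toward 1,
    # back-propagating through the (frozen) discriminator via the chain rule.
    fake = wg * z + bg
    d_fake = sigmoid(wd * fake + bd)
    g_grad_logit = (d_fake - 1.0) * wd
    wg -= lr * np.mean(g_grad_logit * z)
    bg -= lr * np.mean(g_grad_logit)

print(f"final generator offset bg = {bg:.2f} (real data mean = 4.0)")
```

Over training, the generator's output distribution is pulled toward the real one; DeepFake applies the same adversarial pressure to face imagery instead of scalars.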

As we all know, every coin has two sides. On the bright side, DeepFake technology offers mankind real convenience to a certain degree. In the film industry, engineers can generate exciting, incredible scenes without actors or actresses having to risk their lives. Used properly, DeepFake can also liven up family parties or strengthen friendships between companions.

There is no doubt that the development of technology, especially artificial intelligence, gives more power to ordinary people. However, from my perspective, technology itself has no sense of morality, because machine learning models have no consciousness. In other words, DeepFake simply executes human instructions, which may result in more harm than good owing to people's curiosity and selfishness.

As far as I am concerned, DeepFake has become a new tool threatening the era of social media. More unethical actions may follow, because some people use it to prank others or even maliciously forge videos of events that never happened in pursuit of selfish goals. For example, the reputations and portrait rights of many actresses have been violated when their faces were swapped into pornographic videos, which swayed public opinion and damaged their careers. Worse still, what if someone deepfaked a video of Donald Trump during the 2020 presidential election? Unsurprisingly, someone has already done so in an attempt to sway people's votes. These are no longer merely moral issues; such behaviors violate the law in many countries.

Against the current backdrop of rapid economic and social development, whether in humanitarian causes, product launches, or campaign activities, deepfaked videos and images can turn black into white. Not only is the social order challenged, but it also becomes difficult for us to discern the truth. Seeing is not believing anymore.

It is time for us to defend the truth for a better tomorrow. Otherwise, with the advancement of science and technology, our society may slide into the brave new world that Huxley described [5], in which our lives are overwhelmed by a flood of irrelevant or fake information and we amuse ourselves to death [6]. In what follows, I offer some tentative but feasible suggestions concerning technology, law, and society. I hope they will be of some use.

On the technical side, researchers should design more powerful models to detect whether a video or image is genuine or forged. Fortunately, work on detecting image and video forgeries has begun to emerge, such as [2], [3], and [4]. On the legal side, legislatures should introduce more stringent measures to prevent and punish behaviors that disrupt social stability and order. It should be pointed out that the law ought to punish those who abuse DeepFake, not its inventors, because I have always held the principle that technology itself is not guilty. Punishing inventors would hurt the development of science and technology and sap the vitality of technological creation.
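Many detectors in the spirit of [2]-[4] share a common skeleton: score each frame for signs of manipulation, then aggregate the scores over the whole video. The sketch below is a hypothetical illustration of that pipeline only; the per-frame scorer here is a stand-in heuristic, whereas real systems use trained CNN or recurrent models:

```python
import numpy as np

def frame_score(frame: np.ndarray) -> float:
    """Stand-in per-frame 'fakeness' score in [0, 1].

    A real detector would apply a trained neural network; this toy
    heuristic (mean high-frequency energy squashed through a sigmoid)
    exists only to make the pipeline runnable end to end.
    """
    diffs = np.abs(np.diff(frame, axis=0))          # crude high-frequency proxy
    return float(1.0 / (1.0 + np.exp(-(diffs.mean() - 0.5))))

def classify_video(frames: list, threshold: float = 0.5) -> bool:
    """Flag a video as fake if its mean per-frame score exceeds the threshold."""
    scores = [frame_score(f) for f in frames]
    return float(np.mean(scores)) > threshold

# Toy 10-frame "video" of random 8x8 grayscale frames.
rng = np.random.default_rng(1)
video = [rng.random((8, 8)) for _ in range(10)]
print("flagged as fake:", classify_video(video))
```

Aggregating over frames is what makes video detectors more robust than single-image ones: a manipulation that slips past the scorer in one frame rarely does so consistently across all of them.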

Of course, social media platforms such as Twitter and Facebook should take more responsibility for detecting and removing fake information, helping to build a harmonious and trustworthy community environment. Only in this way can media literacy be enhanced and a discerning public be cultivated.

Last but not least, we should realize that only truth leads to liberty, democracy, and human development. Therefore, to counter the menace of DeepFake, each of us needs to improve our ability to think independently and critically, rather than spread information carelessly according to our own tastes and interests.


[1] Suwajanakorn, Supasorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman. “Synthesizing Obama: learning lip sync from audio.” ACM Transactions on Graphics (TOG) 36.4 (2017): 1-13.
[2] Bappy, Jawadul H., et al. “Hybrid LSTM and encoder–decoder architecture for detection of image forgeries.” IEEE Transactions on Image Processing 28.7 (2019): 3286-3300.
[3] Tolosana, Ruben, et al. “Deepfakes and beyond: A survey of face manipulation and fake detection.” arXiv preprint arXiv:2001.00179 (2020).
[4] Güera, David, and Edward J. Delp. “Deepfake video detection using recurrent neural networks.” 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2018.
[5] Huxley, Aldous. Brave new world. Ernst Klett Sprachen, 2007.
[6] Postman, Neil. Amusing ourselves to death: Public discourse in the age of show business. Penguin, 2006.