
Uniting Europe: How Closing the Digital Divide Between Eastern and Western Europe Will Strengthen the EU

Managing Editor Caroline Hubbard analyzes the digital divide between Eastern and Western Europe while proposing solutions for digital innovation in the East.


Putin’s invasion of Ukraine destabilized the international order by bringing war back to Europe, and it has revealed an even greater need for stability and unity between Western and Eastern Europe. One way to counteract Putin’s threats and to improve the international standing of the European Union is to close the digital divide between Eastern and Western Europe, thereby uniting the continent, bringing technological innovation to regions previously untouched by it, and promoting EU initiatives and popularity. The OECD defines the digital divide as “the gap between individuals, households, businesses and geographic areas at different socioeconomic levels with regard both to their opportunities to access information and communication technologies (ICTs) and to their use of the Internet for a wide variety of activities.” In Europe, this technological gap also reflects the broader socio-economic legacy of Communism.

A Geographical Digital Divide 

The history of the digital divide lies in the legacy of the Cold War, diverging economies, and the devastating impact of the COVID-19 pandemic. Technological innovation has defined much of the European Union’s recent history. Its member states have sought to digitize their economies and industries while also setting the worldwide standard for regulations on data and privacy. Yet Eastern European countries, both inside and outside the European Union, have largely failed to replicate the technological success of countries such as Germany and Finland.

The root of this issue is economic. Eastern European countries tend to be poorer than their Western counterparts and thus have fewer financial resources to invest in new technological projects or to adapt to modern tech innovation. The Cold War deeply impacted Eastern Europe’s ability to adopt technology. Although the internet boom occurred after the fall of the Berlin Wall in 1989, the countries behind the Iron Curtain had already been cut off from Western modernity for decades. Despite the Soviet Union’s heavy promotion of science and technology, its weakening economy and the larger socio-economic crises of the late eighties prevented it from maintaining a high standard of technological innovation. When integration and trade between East and West finally began, the East was forced into a state of perpetual “catch-up” with its Western peers.

The European Union has progressively welcomed former Eastern bloc countries into its membership. The largest enlargement took place in 2004, when the EU added the Czech Republic, Estonia, Cyprus, Latvia, Lithuania, Hungary, Malta, Poland, Slovakia, and Slovenia. Many Eastern European countries now play a role in the EU, but according to a report from the World Bank, they lag behind in “the composition of spending across innovation activities and the allocation across the different types of technologies.” While the EU has attempted to spread its technological incentives across all states, the fact remains that some member states are better at adapting to and implementing new technology, given their stronger economic stability or prior interest in technological advancement.

The COVID-19 pandemic worsened the digital divide but also highlighted the need for change. With in-person connection no longer a possibility, companies and economies were forced to adapt to a more digitized world, and many firms moved entirely online. Member states such as Germany adapted more easily to the digitalization the pandemic required and even thrived under it. During the pandemic, the city of Berlin developed the Digital Skills Map (DSM) to promote the sharing of ideas and encourage “pan-EU dialogue around how digital developments are transforming the labor market. It also seeks to showcase the many effective interventions designed to boost digital skills, while giving a local voice to the EU debate around the future of work at the same time.” The success of Berlin and other cities across EU member states shows that the pandemic’s digital shift carries lasting benefits: businesses no longer struggle to conduct work from peripheral regions, and both consumers and businesses have a better understanding of digital tools.

In contrast to Germany’s tech success story during the pandemic, a report from the OECD revealed the severity of the digital infrastructure challenges in the Western Balkans. The biggest issues in the region were the low digitalization of households and the limited number of enterprises able to implement teleworking. The inability to shift to teleworking and digital work processes meant that businesses were far likelier to experience labor shortages caused by movement restrictions. Now that the pandemic has exposed the digital divide and the need for change, the European Union can actively begin improving digitalization within its Eastern European member states.

Role of the EU 

The World Bank’s report on Europe’s digital dilemma identifies three key goals for the continent’s digital future: “competitiveness, market inclusion of small and young firms, and geographic cohesion.” It also distinguishes between the three most prominent types of digital technology: transactional, informational, and operational. To achieve these goals, the report argues, the European Union must invest more effectively across all three types, and in particular help member states including Bulgaria, Croatia, Poland, and Romania to properly invest in technology creation and adoption. According to the World Bank, transactional technologies, which are mostly e-commerce related, are the only ones truly capable of achieving the European Union’s goals, due to their ability to bring together all forms of the digital sector.

Bridging the divide between rural and urban areas is key to promoting technological development. Romania’s cities, such as Bucharest, have much higher rates of transactional technology initiatives compared with more rural areas where digitalization barely plays a role in local firms. Specifically targeting rural regions will also benefit the member state as a whole, as it will allow greater investment and collaboration between regions. 

The European Union should also work to promote telecommunications policy, the economic regulation of interstate and international communication, across the broader region. One way for Eastern European countries to improve digitalization is by driving competition through tech creation, but to do this they need an institutional and legal environment that favors tech development and can guarantee the support of both public and private investors. Promoting telecommunications policy is therefore the quickest and most effective way to establish stability and legitimacy, thus drawing in external support. Ideally, states such as Poland and Bulgaria would create a telecommunications market with lower costs, greater competition, and a more diverse array of services.

The Success of Estonia 

While many Eastern European member states remain decades behind their Western peers, one nation stands out as an anomaly and an example of successful digitalization. Estonia, a former Soviet republic, has achieved unprecedented digital success thanks to a variety of factors, and it serves as a model for all other European Union member states.

The origins of Estonia’s digital success can be traced back to the early nineties, when a group of amateur politicians developed a public digital architecture specifically targeting IT. The goal was to promote IT as a public skill that would improve socio-economic outcomes nationwide. Estonia built up its digital network through small networks of dedicated government workers, with support from the private sector. This collaboration between the public and private sectors proved tremendously effective in creating a coherent digital state. Because all sectors were digitized at the same time, they could rely on each other for support, as in the simultaneous development of cybersecurity alongside online banking. Much of Estonia’s success can be attributed to young politicians with the energy and drive to completely rebuild the country, the close networks already in place, and the decision to digitize just as the internet was entering the mainstream. Still, there are aspects of Estonia’s success story that other countries can copy.

Estonia focused early on convincing its citizens of the benefits of digitalization by designing digitization projects specifically to make their lives easier; this won over skeptics and united the population. The digital Estonian ID card was launched in 2002 with a digital signature that allows citizens to make legally binding decisions remotely and easily sign documents. When asked about his country’s success, Siim Sikkut, Chief Information Officer of Estonia, stated that “Digital leadership needs to be continuous across different administrations. This also involves a deeper understanding of the need to educate not just the wider society, but also government officials behind the transformation.” He also stressed the importance of creating a streamlined and efficient system: “one of the most important factors that helped streamline the government structures, authorities and databases is the once-only-principle which exists to this day. This means that any type of data related to an individual can only be collected by one specific institution, thereby eliminating duplicate data and bureaucracy.” Studying the principal factors behind Estonia’s success reveals that other Eastern European countries must first focus on creating transactional technologies that better their citizens’ lives through transparent, cooperative, and efficient digital systems.
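To make the once-only principle concrete, the sketch below models it as a simple data registry in Python. It is only an illustration under invented institution and field names, not a depiction of Estonia’s actual systems: each category of personal data is assigned exactly one owning institution, any other agency queries that owner rather than re-collecting the data, and duplicate collection is rejected outright.

```python
# A minimal sketch of the "once-only" principle: each category of
# personal data has exactly one authoritative collecting institution,
# and every other agency queries that owner instead of re-collecting it.
# Institution names, categories, and IDs below are hypothetical.

class OnceOnlyRegistry:
    def __init__(self):
        self._owners = {}   # data category -> owning institution
        self._records = {}  # (category, citizen_id) -> value

    def assign_owner(self, category: str, institution: str) -> None:
        # A category may be owned by only one institution, ever.
        if category in self._owners:
            raise ValueError(f"'{category}' already owned by {self._owners[category]}")
        self._owners[category] = institution

    def submit(self, institution: str, category: str, citizen_id: str, value: str) -> None:
        # Only the designated owner may collect this category of data.
        if self._owners.get(category) != institution:
            raise PermissionError(f"{institution} does not own '{category}'")
        self._records[(category, citizen_id)] = value

    def query(self, category: str, citizen_id: str) -> str:
        # Any agency reads from the single source instead of duplicating it.
        return self._records[(category, citizen_id)]

registry = OnceOnlyRegistry()
registry.assign_owner("residential_address", "Population Register")
registry.submit("Population Register", "residential_address", "EE-1001", "Tallinn")
print(registry.query("residential_address", "EE-1001"))  # Tallinn
```

The design choice the quote describes falls out naturally: because ownership is exclusive, duplicate records cannot accumulate, and bureaucratic re-collection of the same data is structurally impossible rather than merely discouraged.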

The EU’s Future in Eastern Europe

Closing the digital divide between East and West raises the question: what would a digitally united and equal European Union look like? There are a multitude of ways in which digital cooperation would improve the EU’s status both on the continent and internationally. The end of the digital divide would help unite EU member states and promote the overall stability and success of the European Union. It would ease the burden on states such as Germany, Finland, and Estonia, which currently possess strong digitized systems, and allow them to confidently invest in the CEE countries (Bulgaria, Croatia, the Czech Republic, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia). The CEE countries do not possess the same economic power and stability as the ‘Big Four’ (France, Germany, Italy, and Spain), which do not need to rely on digital innovation efforts to promote their economies and attract international investment. Greater digital innovation, however, would most certainly draw in international investment, strengthening Eastern European member states and the EU by extension.

The European Union is considered by many to be the leader in data privacy regulation, having set legislative precedents that serve as global benchmarks. Despite angering many American tech companies through its strict enforcement of data protection legislation, the EU has remained firm, even in the face of backlash from Google over the Digital Markets Act, which prevents Google and Apple from combining data from different services to offer targeted ads without users’ consent. By demanding data protection of its member states and the outside world, the EU has shaped the global standard through the General Data Protection Regulation (GDPR), which has become the de facto worldwide benchmark. Implementing greater digitization efforts in Eastern Europe would also allow the EU to ensure that its data protection regulations are more deeply ingrained throughout the continent and provide more opportunities to demonstrate the norm of data privacy regulation in states with newly developing technology sectors.

The past decade has severely weakened the European Union. Brexit, a damaged relationship with the US, China’s growing push for tech domination, and now Russia’s invasion of Ukraine have damaged the EU’s internal and external reputation as a strong and powerful institution. By closing the digital divide, the EU would show the world the strength of its initiatives and its dedication to improving access to technology for citizens across all member states. Closing the digital divide does more than benefit the CEE countries; it also gives the EU the chance to redefine itself in the face of Russian aggression, Chinese tech domination, and American tech companies’ anger over data privacy regulations.


The Political Implications of Neuralink: A Thought Experiment

Staff Writer Pragya Jain examines the political implications of Neuralink, a nascent technology for human and robot symbiosis

In a press conference before an audience of socially distanced and bewildered engineers, Elon Musk presented the latest advancements in brain-machine interfaces while extrapolating the possibilities of this nascent technology for human and robot symbiosis. Musk demonstrated how his start-up company, Neuralink, has the potential to address a wide gamut of brain disabilities, from blindness to paraplegia, by recording and displaying the brain signals of the first of three test pigs: Gertrude. The current application of Neuralink seems a far cry from the wild assertions of its future capabilities. Yet history has documented several technological breakthroughs that redefined our understanding of what is possible and proved how quickly such advances can be made; from the codebreaking machines of Alan Turing’s era to the dominating presence of the Internet, the rapid movement of technology suggests that the future applications Musk asserts for Neuralink must be taken seriously. If brain-machine technology bears the chance of reaching AI and human symbiosis, the regulation and national security threats associated with it must be re-evaluated sooner than expected.

Technology, at its core, pushes no singular objective; it simply reflects the motivations of those able to utilize it. When technological advancements wreak havoc on vulnerable populations and, just as frequently, shed light on social injustices, the varying results are a consequence of a diverse set of global actors. With the rise of revisionist powers challenging the supremacy of the Liberal International Order, and considering some of the most extraordinary promises of brain-machine interface tech, like memory recording, telepathy, and evolution toward AI and human symbiosis, it is imperative to postulate how different state actors are likely to utilize these features to advance conflicting agendas. To convey this divide, imagine the different applications of this emerging technology, which stands the chance of bringing humans and AI into closer alignment, in different political systems.

As the name suggests, brain-machine interfaces such as Neuralink are developed with the intention of joining the human brain with a robotic one, an advancement that would allow the human brain to function more like a computer. With this technology, the data produced by our firing synapses could be collected, stored, and uploaded to an external hard drive for retrieval. Humans might discover ways to download and process new information at rates virtually incomprehensible to those who live without a chip. However, unlike a computer, human intentions will still exist, and in states under authoritarian rule, technology like this could have severe ramifications for individual liberties and existing social structures. Consider, for example, how the Chinese government has used the internet and facial recognition technology to limit citizens’ access to free speech and control their actions. To regulate and monitor internet traffic, the Chinese government has set up state agencies dedicated to censoring information that diverges from its established political agenda. Since the advent of the internet in China, there has been contentious debate on how to regulate it to suppress potential political uprisings, and with the Great Firewall initiative and its subsequent extension, Great Cannon, the Chinese government has absolute censorship control over its media and can edit what remains on the internet at a moment’s notice. The terrifying concept of a “sovereign internet” extends to Russia and North Korea, and it exposes a worrying trend within authoritarian regimes in which technology is used as a weapon for oppression rather than a tool for freedom. In the near future, it seems likely that brain-machine interfaces could be used by these authoritarian governments to tighten their surveillance of political dissent.

A general distinction between authoritarian regimes and democracies is the distribution of political power in each; a centralized authority figure is characteristic of the former while the latter is created on the principle of rule by the majority. Brain-machine interfaces can easily be used to the advantage of authoritarian governments to push back more effectively against political upheaval and strengthen their hold on power. Compared to the application of facial-recognition technology in China, where surveillance cameras are heavily used to track citizens and deter opposition, brain-machine interfaces offer a more direct approach to achieve the same agenda of control and one that is not reliant on external factors like good weather. If the production and distribution of these technologies are also controlled by the central authority, it will only further the oppressive agendas of these governments and remove the agency of the people to revolt. A dystopian future may arise in the worst-case scenario of brain-machine interfaces where citizens’ beliefs are molded to mirror the political agendas of individual states and a void is created to replace creative thought and individuality.  

In modern-day democracies, ideals of personal freedom and individuality are used to convey the message of equality, yet although they are more progressive than authoritarian governments, the current institutions of democratic nations are no better equipped to implement brain-machine interface technologies. As the antithesis of the centralized power structure, democracies contain an overflow of distinct moral values and political alignments; while this makes for a conducive environment for constant growth and critical thinking, the sheer volume of opinions and information contributes to inefficiencies in political change and the rise of interest groups. This notion is evident in countries with “first-past-the-post” electoral systems, which elect representatives who receive the most votes rather than a majority of votes and encourage pandering to small interest groups. In doing so, the voices of the few are given a larger platform than the needs of the majority, and greater political polarization develops. The advent of brain-machine interfaces will only worsen this polarization as social cleavages are exposed between groups who firmly support its implementation and those who adamantly oppose it.

The greatest achievement of the internet is its equalizing effect on access to knowledge and new information. However, as the number of individual participants continues to grow and barriers to entry are lowered, misinformation and unfounded conspiracy theories are bound to spread rapidly. Even more concerning is how the internet can be used as a medium for states to influence the results of other democracies’ elections, a feat Russia achieved during the 2016 U.S. presidential election. If current cybersecurity attacks are difficult to identify and dismantle, the rise of brain-machine interfaces will only exacerbate the issue. The accelerated rate at which data will be created and consumed could have dire real-world consequences as people become more likely to view and retain misinformation.

In perhaps their only similarity to authoritarian governments, democratic states also engage in data collection on their citizens. The purpose of this monitoring is quite distinct in countries like the United States, which use it to fight terrorism rather than to suppress political dissidence. Still, this extension of power raises questions of infringement on privacy and civil liberties, questions that led to intense backlash against the U.S. government when Edward Snowden exposed the NSA for recording and storing large amounts of data on American citizens. Beyond the morally questionable nature of this behavior, the centralized storage of such data could pose great national security risks in the event of an information leak. Liberal democracies are therefore equally unprepared for the consequences of brain-machine interface technologies, and there is a desperate need for international cooperation to deter the threats they pose.

As humans become more reliant on technology and more interconnected with each other, it is likely that today’s technological woes will only be amplified by tomorrow’s revolutionary discoveries. This thought experiment hopes to demonstrate that all political systems — authoritarian or democratic — will not be able to implement technologies like Neuralink without some major breakdown along the chain of command. It is imperative that global actors come together to set agreed-upon norms, aid in the creation of an international organization with the goal of monitoring R&D in individual states and weeding out harmful actors, or simply expand existing platforms. Although the rise of nationalism across the world has had negative effects on international cooperation and openness, the existence of successful regulation on cyberspace in international law is a prime example of how global unity can uphold equitable applications of emerging technologies. Additionally, it is absolutely necessary that this conversation extends beyond state actors to private firms, the scientific community, and most importantly to average citizens, whose lives will be affected the most without proper regulation. Elon Musk’s demonstration with Neuralink solidified the notion that brain-machine interfaces are an inevitable advancement that will uncover the flaws of today’s technological governance and generate new threats as well as opportunities. The only thing left uncertain is whether or not global leaders will have the foresight and international framework to implement it correctly. 


Playing God: An Evaluation of the Ethicality of Gene Editing Technology

Staff Writer Reed Weiler delves into the complex debate regarding gene editing and the implications of such technologies.

One of the most exciting things about scientific research is the way it allows us to transcend current understandings of human potential. Genetic engineering exemplifies this power, as it often involves questions of what a human ought to be. Put simply, genetic engineering is “the direct manipulation of DNA to alter an organism’s characteristics in a particular way.” This practice, while holding a large place in modern-day discussions of bioethics, has a long history rooted in scientific exploration. Beginning in the early 1950s with the discovery of the double-helix structure of DNA by Rosalind Franklin, James Watson, and Francis Crick, the study of genetics became a recognized topic within scientific circles. Shortly after, geneticist Arthur Kornberg performed the first DNA synthesis, marking the official birth of genetic engineering as we know it. It wasn’t until the seventies that the field really began to take off, with the introduction of new technologies like gene splicing and DNA mapping. These innovative methods served as the early foundation for modern genetic engineering, allowing unprecedented ease of manipulation of genomes. In the succeeding decades, substantial innovations were made in vaccines and synthetic drugs, alongside investigations into cloning and the genetic modification of plant life. With the turn of the century, the genetics community began to focus on the study of the human genome. It wasn’t long after this transition that renowned biochemists Jennifer Doudna and Emmanuelle Charpentier developed the first CRISPR gene-editing mechanism, a tool that would offer unprecedented potential in a variety of fields yet pose an equally momentous challenge to the ethical foundations of modern science.

CRISPR-Cas9, more commonly referred to simply as CRISPR, is a biochemical technology that allows for cheap and easy gene editing. Utilizing what are known as “clustered regularly interspaced short palindromic repeats,” CRISPR gives scientists the ability to combine endonucleases, proteins that cleave DNA at specific sites along the molecule, with pieces of RNA to add, delete, or turn off a certain sequence in the DNA. This means that, theoretically, biologists could harness CRISPR to make fine-tuned changes to an individual’s genetic makeup, whether curing a formerly incurable inherited disease or enhancing certain traits, such as strength or intelligence. The most prominent recent use of this technology was by a Chinese scientist named He Jiankui. On the last Monday of November 2018, Jiankui announced that he had created what would instantly become known as “CRISPR babies” by editing the genomes of two unborn twins and altering their DNA sequences using CRISPR. The goal of this experimentation was to disable a gene in the twins’ genomes that is known to facilitate HIV infection. The news of Jiankui’s work was met with overwhelming condemnation from the scientific community and society writ large. Experts called his work “sloppy” and its application “unnecessary,” citing his failure to meet germline editing requirements, a set of ethical and moral guidelines established by the 2017 National Academies of Science report. The ultimate criticism of Jiankui’s attempt to edit the human genome, however, stemmed from accusations of a lack of moral conscience. While Jiankui can be seen as noble in his attempt to protect the twins from HIV, the effect of his tampering was an overall decrease in life expectancy, something that was neither foreseen nor acknowledged by Jiankui himself. In light of the uproar over Jiankui’s perceived callousness, a host of important questions have arisen regarding both the feasibility and the moral efficacy of gene editing as a practice.
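The targeting step described above is, at heart, sequence matching: a guide sequence tells the Cas9 protein where to cut. The toy Python sketch below illustrates that idea on a DNA string; the sequences are invented, and the model ignores the real molecular machinery (PAM sites, repair pathways), so treat it as an analogy rather than a biological simulation.

```python
# A toy analogy for CRISPR-Cas9 targeting, not a biological model:
# the guide sequence locates a matching stretch of DNA, and the
# "edit" deletes it or swaps in a replacement. Sequences are invented.

def crispr_edit(dna: str, guide: str, replacement: str = "") -> str:
    """Find the site matching the guide, then delete or replace it."""
    site = dna.find(guide)
    if site == -1:
        return dna  # no matching target: nothing is cut
    # Cleave at the target site and splice in the replacement
    # (an empty replacement models knocking the sequence out).
    return dna[:site] + replacement + dna[site + len(guide):]

genome = "ATGCCGTACGGATTACA"
print(crispr_edit(genome, "TACG"))          # knockout: ATGCCGGATTACA
print(crispr_edit(genome, "TACG", "GGGG"))  # substitution: ATGCCGGGGGGATTACA
```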

Despite the outcry over Jiankui’s irresponsible use of CRISPR technology, the larger conversation about the benefits and drawbacks of genetic engineering, and gene-editing technology in particular, demands an analysis that accounts for the industry’s multitude of potential applications. Many ardent supporters of gene editing argue that technologies like CRISPR could cure medical conditions that substantially reduce quality of life, offering an enormous boost to modern medicine’s ability to address genetic diseases that result from mutations, like cancer or HIV susceptibility. Some experts even assert such technology has the potential to cure sickle-cell anemia, a genetic blood disorder that, until now, has been believed to be incurable. Additionally, gene-editing technology bears another, less intuitive application: agriculture. Since CRISPR allows precise and efficient modification of genomes, scientists could use it to improve agricultural traits like crop resiliency, adaptation, and end-usage. Since agriculture is crucial to the advancement of the United States (U.S.) economy, it is essential that we use the technology available to us to improve farming practices however we can.

In light of these potential advancements, it may seem difficult to understand why so many within the scientific community were so starkly opposed to Jiankui’s application of CRISPR technology to human embryos. After all, he was trying to help them, albeit in an irresponsible and unnecessarily hasty way. The question of how we ought to judge Jiankui’s use of gene editing demands inquiry into how capable we, as a society, are of controlling the potential negative side effects of such a technology. Framed in this manner, it becomes less a question of the intrinsic features of the technology and more one of technical feasibility. Tied up in the debate over feasibility are philosophical questions of future value, since any use of CRISPR runs the risk of harming future generations in unknown ways. In this sense, it becomes clear why expert consensus opposed Jiankui’s use of CRISPR: although his intentions may have been good, the state of the technology was such that he was incapable of preventing the negative side effects of meddling with the human genome.

Independent of whether the technology is ready for responsible use, some, such as theology and ethics professor Ted Peters, argue that gene-editing practices are unethical because they create the risk of eugenics. As the current state of CRISPR technology shows, gene-editing tools are not yet widely accessible to the public. Since developing such advanced technologies is expensive, tools like CRISPR are likely to become available first to those higher on the socio-economic ladder. This poses a massive threat to the fairness of society: if wealthy people are the first customers of gene editing, genetically enhanced people could form a distinct socioeconomic class, raising the specter of inequality. According to Peters, the difference between a world of ethical gene editing and the much darker, socially stratified scenario laid out above lies in the distinction between therapy and enhancement. The therapy model of gene editing emphasizes the potential for technologies like CRISPR to materially improve the lives of those suffering from genetic diseases. The enhancement model, on the other hand, sees CRISPR as a means to raise the standard of human genetic makeup in hopes of elevating the status of the human race as a whole, often resulting in class-based stratification of human worth. The line between these two interpretations is admittedly blurry and hard for bioethicists to draw. As a result, it is likely that a world with CRISPR would be one in which certain castes of society are seen as more fit to reproduce than others, justifying mass oppression of those at a lower genetic “tier.”

How can one reconcile the relative advantages and disadvantages of something as transformative as gene-editing technology? First, it is important to distinguish between degrees of relevance when considering the drawbacks of CRISPR. The uncertainty tied to the readiness of the technology is by no means intrinsic to its use, since our ability to operate effectively on the human genome is purely a matter of empirical circumstance. The potential to serve as the foundation of a modern eugenics movement, however, cannot be separated from the moral quality of gene-editing technology. This concern, while merely hypothetical, is inherent to the technology itself. In fact, it has undergirded the entire field of genetic engineering since its conception in the 1950s.

Given that CRISPR poses an imminent threat to the equity of our society, what should we do about its proliferation? Maybe He Jiankui’s lapse in moral judgment was, in retrospect, a good thing. After all, his error did, at the very least, make clear to the scientific community and the world how unprepared we are as a society for things like CRISPR, potentially delaying widespread acceptance of the technology. No matter the answer to this question, gene-editing is undeniably unethical due to its propensity to increase inequality within a society, and thus anyone who willingly pushes the development of the technology would be making a mistake. While those who parade the potential benefits of CRISPR do so out of good intentions, their advocacy is dangerously short-sighted and could contribute to a path of development that is neither safe nor equitable.


Huawei and 5G in the Tech Cold War: China vs the United States

Staff Writer Anna Janson outlines the tumultuous history of Huawei and its relationship with the United States.

Although the White House publicly stated that it opposes a government-led deployment of a national 5G network, it reportedly considered the idea for over a year. The gravity of this idea, a federal takeover, sheds light on the current situation: China appears to be ahead of Silicon Valley on this technology. 5G is “a type of wireless networking infrastructure designed for fast connectivity of self-driving cars, virtual reality, the internet-of-things, and other technologies that are emerging after the smartphone-centric 4G era we’re currently in.” This means that whichever country leads 5G will dominate the technology sector and reap the economic and national security advantages that come with it. The strongest supplier of 5G technology is Huawei, a Chinese telecommunications company that has been doused in controversy since its beginning, and one that should be an important part of trade talks between the United States (U.S.) and China.

In the 1990s, the company made its first big deal with the People’s Liberation Army. Indian intelligence agencies then placed Huawei on a watchlist in 2001 for allegedly supplying the Taliban. When Huawei attempted to buy part of the U.S. company 3Com in 2007, the United States Congress blocked the deal due to security concerns. This was followed by an FBI investigation into possible violations of U.S. trade sanctions on Iran. In 2009, British spy chiefs “reportedly briefed ministers that Huawei hardware bought by BT Mobile could be hijacked by China to cripple the UK's communications infrastructure.” In security briefings by the British company Vodafone, backdoors and other flaws were found in Huawei’s equipment. Huawei tried to buy the Sprint network and to build a national wireless network for emergency services in the U.S., but both requests were denied by the United States government. A United States investigation into Huawei and ZTE concluded in 2012 that neither company “cooperated fully with the investigation” and that “the risks associated with Huawei’s and ZTE’s provision of equipment to U.S. critical infrastructure could undermine core U.S. national-security interests.” At about the same time, Australia blocked Huawei from its National Broadband Network, and a Huawei CFO was linked to Skycom, a firm that offered HP equipment to Iran despite U.S. sanctions. Information leaked by Edward Snowden revealed that the U.S. National Security Agency had hacked Huawei in order to spy on the company through Operation Shotgiant, which was allegedly successful. In 2018, bans on certain Huawei products were implemented in several countries, including Australia, New Zealand, and Japan. Meng Wanzhou, Huawei’s chief financial officer and the daughter of its CEO, was arrested by Canadian officials, with plans for extradition to the United States and a court date set for September.

On the other hand, Huawei does not concede that it is a security concern or that it is undermining the United States. In an open letter to the Obama Administration, the company claimed that “the allegation of [Chinese] military ties rests on nothing but the fact that Huawei’s founder and CEO, Mr. Ren Zhengfei, once served in the People’s Liberation Army.” Additionally, Mr. Ren stated in 2015, “of course we support the Chinese Communist Party and love our country. But we don’t compromise the interest of other countries. We comply with the laws of every country we operate in.” In an attempt to further demonstrate the company’s commitment to fighting corruption, he cited its “confess for leniency” program: employees who came forward with violations before a certain deadline would receive leniency, while cases uncovered afterwards would be turned over to Chinese authorities. Furthermore, Huawei openly encouraged the United States to come forward with any evidence for its accusations.

This year has been full of new developments. More countries banned certain Huawei products, major Silicon Valley companies cut ties, and the United States formally charged the company with thirteen crimes. These charges include violating United States sanctions on Iran, but “prosecutors convinced a federal judge that releasing too much [evidence] would pose a risk to national security and other governmental concerns.” Clearly, this is bigger than just one company. The Wall Street Journal described these charges as just “the latest to accuse the Chinese government or Chinese companies of stealing intellectual property from U.S. firms through a combination of cyberattacks, traditional espionage and other means.”

Continuing down this path toward a “digital iron curtain” will negatively impact both China and the United States by limiting their access to necessary technological components, but the trade war is rapidly escalating. President Trump issued an executive order restricting U.S. tech purchases, tariffs were raised from 10 percent to 25 percent on $200 billion worth of goods, and the government added Huawei to a blacklist. President Trump later stepped back, stating that American companies can once again sell to Huawei since “the companies were not exactly happy that they couldn’t sell because they had nothing to do with whatever it was potentially happening with respect to Huawei,” but it is unclear whether the U.S. will be able to purchase components from Huawei in the coming years.

These tech tensions are shaping up to be a ‘technological cold war,’ and the potential to alleviate the issues through trade talks seems slim. A Chinese government advisor said that the United States requested “enormous [amounts], even hundreds” of legislative modifications, and an advisor to the Chinese State Cabinet explained that “China and the U.S. have fundamentally contradictory attitudes as to what would be a good deal.” Additionally, President Trump seems to think that China is desperate after experiencing what he claims was “the worst year they’ve had in 27 years.” However, as analyst Michael Ivanovitch notes, “nobody seemed to notice that the Chinese were laughing all the way to the bank with $137.1 billion of surpluses on their U.S. trades.” Although a trade deal could potentially benefit both countries and may be the only feasible solution, China does not think it is worth meeting the lengthy list of demands made by the United States.

On the other hand, the editor-in-chief of the Chinese newspaper Global Times, Hu Xijin, has said that he expects in-person talks to come soon. Reuters noted that the “Global Times is not an official mouthpiece for the Communist Party, though its views are believed to at times represent those of its leaders.” Despite conflicting expectations, perhaps the U.S. and China will be able to collaborate and end the turmoil that has “upended” the markets.

The Huawei allegations prove that 5G stands at the forefront of international relations and a tech cold war between the U.S. and China is imminent if trade negotiations regarding 5G fail. Instead of considering aggressive government intervention with the private tech sector, such as nationalizing a 5G network, the United States should focus on strengthening diplomatic relations with China to make a deal that both countries can benefit from. Both sides need to be prepared to meet in the middle in order to prevent a much bigger conflict.


Artificial Intelligence and Ethics: An Exploration of Machine Morality

Staff Writer Reed Weiler explains the implications of new AI technologies for government policy.

Since the Industrial Revolution, waves of technological innovation have drastically altered the global economic landscape. From the steam engine to the telephone to the widespread use of electricity, technology has played a pivotal role throughout history in advancing the capabilities of the human race. Today, the world faces its next great hurdle along the path of technological progress, possibly its largest yet: artificial intelligence. Our current socio-economic systems trend toward increased automation; by the early 2030s, roughly 38% of US jobs are expected to be at risk of automation, with more than 85% of customer interactions projected to be managed without a human. Although the integration of machines into the global economy is no new concept, the notion of machine learning and consciousness presents policymakers with a host of new social, political, and ethical concerns. To better explore and address these concerns, one must first ask: what exactly is artificial intelligence, and what steps should we take to control it? This article will argue in favor of programming AI with normative philosophy in order to benefit the future of the human race, focusing on German philosopher Immanuel Kant’s theory as a starting point for AI ethics.

We have seen touch-screen soda dispensers and ATMs for years, but the more modern term “AI” describes the study and design of intelligent agents, where an intelligent agent is a “system that perceives its environment and takes actions which maximize its chances of success.” Otherwise referred to as computational intelligence or rationality, AI is, put simply, a blanket term for any form of intelligence demonstrated by a machine. The majority of existing AI technology takes the form of simple AI: machines that rely on decision trees, a predetermined set of rules and algorithms for success. Machine learning, however, is distinct in that it allows machines to learn without being explicitly programmed. This type of technology allows machines to improve their decision-making by incorporating and analyzing swaths of data about a particular task and the success rate of certain actions. Often referred to as complex AI, deep learning machines work by picking out recognizable patterns and making decisions based on them. Thus, the more data you feed such a system, the smarter it becomes and the better it works. In today’s society, we can already see the benefits of machine learning in many of our most innovative technologies, such as the predictive analytics that generate shopping recommendations or the AI used in security and antivirus applications worldwide. The drawbacks, however, are far less evident.
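The distinction between a predetermined rule and a learned one can be made concrete in a few lines of code. The Python sketch below is only an illustration built on an invented spam-filter example: the "simple AI" applies a rule a human wrote down, while the "learning" function derives its own rule (a threshold) from labeled examples, which is why feeding it more data can change and improve its behavior.

```python
# A minimal sketch (invented spam-filter example) contrasting the two
# approaches described above: a "simple AI" applies a predetermined rule,
# while a learning system derives its rule from labeled data.

def simple_ai(message: str) -> bool:
    # Predetermined rule written by a human: flag a known trigger phrase.
    return "free money" in message.lower()

def train_threshold(examples: list[tuple[int, bool]]) -> int:
    # "Learn" the exclamation-mark count that best separates spam from ham
    # by scoring each candidate cutoff against the labeled examples.
    best_cut, best_correct = 0, -1
    for cut in range(10):
        correct = sum((count > cut) == is_spam for count, is_spam in examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

# Labeled data: (number of '!' in the message, was it spam?)
data = [(0, False), (1, False), (4, True), (6, True), (5, True), (0, False)]
cut = train_threshold(data)
print(simple_ai("Claim your FREE MONEY now"))  # True, by a fixed human rule
print(4 > cut)                                  # True, by a rule learned from data
```

The learned threshold shifts if the training data changes, whereas the fixed rule only changes when a human rewrites it; that gap is the whole distinction the paragraph draws.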

Despite Terminator-esque representations of a war-torn future dominated by robots, the real danger of the advancing field of AI research lies in the fundamental lack of control at the heart of the development of autonomous machines. For many experts in the field, the question of whether AI will yield beneficial or harmful results for the human race is simply the wrong one; instead, the focus falls on determining the degree of control with which we execute this line of progress. Most notably, Tesla and SpaceX CEO Elon Musk remains outspoken in his belief that AI’s development will outpace our ability to manage it safely. Musk has even gone as far as to claim that AI development poses a greater threat to humanity than the advent of nuclear weapons, citing the machine intelligence that defeated the world champion in the ancient Chinese strategy game Go. Although much of the AI used in the status quo has yet to cross this intelligence threshold, the increasing development of neural networks for complex AI has opened the door for an exponential uptick in the rate of machine learning. Once the cat is out of the bag, warns Musk, the intelligence in question will be unstoppable and has the potential to wreak havoc on all of society. Autonomous drone strikes. Release of deadly chemical weapons. Violent revolution fueled by mass media propaganda campaigns. When one considers the degree to which we rely on machines for public health, global military operations, and political communications, the necessity of control over the activity of AI becomes abundantly clear. Therefore, the solution to the dangers of AI development lies in our ability as humans to control its behavior past a certain threshold of growth, through whatever means necessary.

As outlined above, policymakers have a clear incentive to avoid a scenario in which the rate of AI learning exceeds our ability to control it. However, as can be seen in the status quo, leading governments have failed to adequately regulate their respective tech industries, causing them to be caught in a game of catch-up with AI developers. As evidenced by the 2018 Congressional hearing questioning Facebook CEO Mark Zuckerberg, the government has allowed the tech industry to exceed its reach, with no major value shift or policy agenda in sight. The question now becomes, what should policymakers do to maximize the chance that an AI outbreak would yield positive consequences for society? The answer lies in the study of philosophy, or ethics. At its most basic level, philosophy, or the debate over morality, is a question of what an agent ought to do. This question is especially relevant when applied to AI; after all, an AI with an intelligence level greater than that of humans would be able to rewrite its own code, effectively making itself anything it wants to be. Yet, there is much uncertainty as to whether an AI would want to rewrite itself in a hostile form, or a peaceful one. This is where ethics comes in. If AI developers could program machines with ethics that morally prohibited them from harming humans, then the scenario in which they utilize their neural capabilities for harm becomes much less probable. In this sense, ethics comes into view as the primary method of control, potentially the last one, that humans could bear over their creations.

Next, we are tasked with determining which system of ethics would yield the best outcome for humanity in the event that AI exceeds our ability to regulate it. Before making this determination, it is important to understand some core distinctions between various branches of philosophical thought. Normative philosophy is divided into two major categories: consequentialism and deontology. Consequentialism dictates that the morality of an action be determined by looking purely to the consequences of said action, and that an ethical agent ought to seek to achieve the maximal state of affairs. Deontology, on the other hand, is a system of ethics that uses universal rules to distinguish right from wrong. A deontologist would not be concerned with the consequences of an immoral action, even if the consequences were positive, since the action is deemed immoral by its very nature. Similarly, a consequentialist would not care if an action is intrinsically immoral or violates certain rights, insofar as the action produces good consequences down the road. Put simply, consequentialists are concerned with ends, and deontologists are concerned with means. In the context of AI decision-making, this distinction could make all the difference; for example, a consequentialist AI might decide that killing one particularly evil human would amount to thousands of lives saved down the road, thus justifying the practice of murder on the part of AI for the greater societal good, despite the intrinsic wrongness of such an act. Conversely, a deontological AI would disregard the future benefit of killing the evil individual for the sake of avoiding committing a violation of that individual’s fundamental rights. A third, less prominent branch of normative philosophy, known as virtue ethics, offers an alternative to the more rule-based approaches listed above. Conceived by Aristotle, virtue ethics is an approach to normative ethics that emphasizes the virtues, or moral character of an agent, rather than the duties and consequences involved in an agent’s action. Under this theory, an AI would be considered “ethical” if it took actions that were reflective of intuitively desirable character traits, such as honesty, courage, or wisdom.
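To make the contrast concrete, the schematic Python sketch below encodes the two decision procedures side by side. The scenario, the utility number, and the rule list are invented stand-ins rather than a serious moral model: the consequentialist evaluator looks only at the predicted outcome, while the deontological evaluator checks the act itself against fixed rules and ignores the payoff entirely.

```python
# A schematic sketch (invented scenario and utilities) of the distinction
# drawn above: the consequentialist scores outcomes, while the deontologist
# checks the action itself against universal rules, ignoring the payoff.

FORBIDDEN_ACTS = {"kill", "lie", "steal"}  # stand-ins for universal duties

def consequentialist_permits(action: str, net_utility: int) -> bool:
    # Only the consequences matter: positive expected utility permits the act.
    return net_utility > 0

def deontologist_permits(action: str, net_utility: int) -> bool:
    # Only the nature of the act matters: a forbidden act stays forbidden
    # no matter how much good it is predicted to produce.
    return action not in FORBIDDEN_ACTS

# The article's example: killing one "particularly evil" person is predicted
# to save thousands, so the modeled net utility is large and positive.
action, utility = "kill", 1000
print(consequentialist_permits(action, utility))  # True: the ends justify it
print(deontologist_permits(action, utility))      # False: the means are ruled out
```

The same inputs yield opposite verdicts, which is exactly why the choice of framework, not just the quality of the AI's predictions, determines how such a machine would act.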

If the goal of programming AI with a system of ethics is to preserve the future wellbeing of the human race, then any attempt at formulating a normative theory upon which to program AI must center on the value of humans as moral agents. Otherwise, we run the risk of becoming an obstacle in the way of the machine’s progress. “Kantianism,” the theory created by Enlightenment thinker Immanuel Kant, refers to the deontological system derived from universal principles of human worth. This section lays out the primary arguments for adopting a Kantian system of normative ethics for AI, explaining the theory’s applicability to AI and the advantages this approach holds over alternatives.

First and foremost, Artificial Intelligence must be programmed with a rule-based (deontological) system of ethics, instead of the more calculative and character-based approaches of consequentialism and virtue ethics. Robotics expert Matthias Scheutz argues that the need for a “computationally explicit trackable means of decision making” requires that ethics be grounded in deontology. Since AI have the potential to make incredibly complex moral decisions, it is important that humans are able to identify the logic used in a given decision in a transparent way, so as to accurately determine the morality of the action in question. This necessitates deontology, as theories that rely on valuation of consequences or judgements of character are far more subjective and difficult to track in an ordered manner.

Furthermore, Kantianism is uniquely suitable to AI programming because of its prioritization of the self-determination and rational capacities of other moral agents. While attempting to formulate a moral theory, Kant began his inquiry by drawing a distinction between the moral status of rational agents, and non-agents. According to Kant, humans are morally distinct from other beings in their ability to use their rational capacities to set and pursue certain ends. This status would also apply to AI. Dr. Ozlem Ulgen, member of the UN group of Governmental Experts on Lethal Autonomous Weapons Systems, claims that technology may be deemed to have rational thinking capacity if it engages in a pattern of logical thinking from which it rationalizes and takes action. Although Kant’s concept is reserved for humans, the capacity aspect may be fulfilled by AI’s potential for rational thinking. Not only does this prove the suitability of Kantian ethics to AI, but it also provides built-in advantages when it comes to protecting human interests. As Kant identifies the source of moral value as individual reason, the rules that follow accordingly seek to protect that same capacity. For example, Kantian ethics prohibits harming others, as doing so would fundamentally contradict the capacity for reason within other moral agents. In this sense, a Kantian AI would be far less likely to do harm unto humans, as the core tenet of their philosophy would be tied to our shared rational capabilities. Thus, Kantian ethics provides a human-centric approach to formulating moral rules.

Lastly, the subjectivity at the heart of consequentialist and virtue-ethical approaches to morality provides a comparative advantage to Kantian ethics. Consequentialist theories, on one hand, mandate that we maximize the probability of good consequences but do not inform us of what those consequences are. Thus, consequentialist theories are incomplete in that they leave it up to the agent in question to determine what counts as a “moral good.” This poses potential problems when applied to AI, as a machine could very well decide that the extermination of the human race is a good consequence and act to achieve it. Similarly, virtue ethics relies on the notion of “good character,” or the idea that we ought to inculcate certain character traits within society. This commits the same error consequentialists often do, as it fails to provide a comprehensive account of the “good person,” leaving room for AI to drum up its own conception of virtuous character to suit its own needs. Kantianism, however, avoids this pitfall, as it sources “the good” within the agent itself. To a Kantian, actions are good insofar as they respect the right of other moral agents to set and pursue ends, further helping to create a human-centric system of ethics.

In answering the question of which system of ethics would be most suitable for programming AI, a variety of other questions arose, all of which demand further investigation. Evidently, reaching a definitive conclusion on the issue of AI ethics is no easy task; after all, humans have been debating different philosophies and modes of thought for hundreds of years, and the conversation doesn’t seem to be ending any time soon. The one thing we, as a society, can agree on despite differences in perspective is that morality is fundamentally subjective. If history is any example, the ethical systems by which people choose (or attempt) to live their lives are heavily contingent on their individual points of view. As such, the mission of determining how to program an ethical AI is complicated by the reality that we, as humans, do not operate under a perfect ethical framework to begin with. According to virtual reality developer and CEO Ambarish Mitra, this concern is surmountable, as he argues that AI could help us create one. Often called “super morality,” the idea is that AI’s potential to reach a level of consciousness “beyond” that of humans gives it the potential to reach the sort of higher ethical truth we have long been searching for. If moral truth is to be discovered through reflection and deliberation, machines with higher rates of learning and cognition than humans would have a better shot at discovering that truth than we ever will.
