On Artificial Intelligence

Alexander Harvey
6 min read · Feb 18, 2019

The perpetual advancement of machine learning and Artificial Intelligence, pioneered by technology companies, visionaries, and programmers around the globe, raises serious ethical questions that must be explored thoroughly. Proponents of AI point to its potential to improve our day-to-day lives, enhance information processing, and pave the way for further technological advancement. Nevertheless, others foresee a dystopian future characterized by the increased automation of jobs leading to mass unemployment, the increasing centralization of infrastructure, and even a robopocalypse. While both contentions bring relevant information to the discussion, Artificial Intelligence is not inherently good or bad for mankind (just as a hammer is not inherently good or bad for mankind); thus, the burden of ethical implementation falls upon the human in control of the tool. The ultimate ethical dilemma of Artificial Intelligence does not hang on the ethicality of its development, but on the extension of traditional ethical questions to impacts of far greater reach.

Artificial Intelligence, machine learning, and autonomous systems have already reshaped many aspects of life. By catalyzing improvements in standards of living, day-to-day convenience, and efficiency on almost every front, these developments have unquestionably contributed to the betterment of mankind. Not only do these technologies promise to improve the lives of those in developed nations, they have also been applied to aid those in less fortunate countries: AI has been used in less-developed nations to support disaster relief, agriculture, healthcare, and education. For example, in the aftermath of a major earthquake in Nepal, drones were used to assess damage, map the destruction, and streamline the rescue mission. While advancements in AI certainly possess the ability to better the lives of many, risks surrounding data privacy, the use of AI in combat, the disruption of traditional employment models, and increasing centralization must also be addressed.

With the trend of increasing digitalization, automation, and mechanization come concerns about how large tech companies use the data they mine. The issue of data privacy reflects some of the larger ethical questions that must be considered as the development of AI continues. Companies such as Facebook, Google, and Amazon, which house massive amounts of data on their customers, are currently grappling with this issue. The core problem of data privacy lies in who controls this data and how it will be used. Without a continued emphasis on data protection and the decentralization of power, it is not difficult to foresee a totalitarian future in which citizens with dissenting viewpoints can easily be found and eliminated, having already handed all of their data over to the regime. While this may sound outlandish to some, if history is any teacher, it is certainly not impossible. Totalitarianism emerged before the advent of AI and data harvesting; with the aid of these new tools, however, such control would be far more feasible. The ethical implications of these new tools therefore extend further than those of any previous human invention.

The issue of data privacy represents one front of the technological ethic that must be explored. Maj. Gen. Robert H. Latiff (Ret.), author of Future War: Preparing for the New Global Battlefield, argued a similar point, citing a 1968 Foreign Policy Association conference at which someone remarked on the same danger: “‘It is much too optimistic,’ he said, ‘to assume that the technologies we provide to governments would entail their ability to use them wisely.’ He noted correctly that ‘applying technology, like all human efforts, bears bittersweet fruits.’ We have seen this in spades, and I think we will see this same result played out in the next 50 years.”

While the bittersweet fruits of technology are not foreign to mankind, the further we advance technologically, the greater the impact these technologies have. As the impact of technology grows, the call for an ethic to guide its applications grows ever louder. There appear to be two distinct ethical arguments in this modern age: the first is a call for a new ethic to guide us, while the second calls for a continuation of traditional ethical systems into the future. As Charles T. Rubin, professor of political science at Duquesne University, has pointed out, the call for a new ethic would need to rest upon the claim that human nature is transient rather than fixed. While gains in life expectancy and infrastructure have certainly reshaped human life, Rubin asserts that human nature remains fixed, and that novel circumstances do not beget the need for a novel ethic.

Ever capable of innovation, of building upon the work of others, and of perpetual self-improvement, man's nature, while creative, is fixed within a relatively constant set of behavioral archetypes. This fixed nature creates the need for a reapplication of traditional values, a perpetual reaffirmation of the old ethical framework oriented around the preservation of human life. As the development of AI continues, the ethical questions facing individuals at the forefront of this technology will continue to grow in scale and impact. Thus, a renewed emphasis on these paramount ethical values will be vital to man's continued forward progress.

Another foreseeable problem, which goes hand-in-hand with the trend toward increased automation, is the substantial disruption of traditional labor models. While the conversation surrounding this potentially impending dilemma has led to discussions of policies such as a Universal Basic Income, there are strong reasons why this solution should be avoided. The Universal Basic Income model falls short on two fronts. First, it opens the door to a hyperinflationary economy. Under this model, with the government as the sole distributor of income to large portions of a non-working population, and with staggering amounts of debt already existing at every level, the only way for governments to fund such a plan would be through rampant monetary expansion. As Futurist Institute chairman Jason Schenker has noted, the direct effects of this type of policy would be a dramatic devaluation of the currency in question, a substantial upward movement in prices, and ultimately (on a long enough time horizon) economic collapse. Second, the model falls short in the realm of coercion. Under a Universal Basic Income, large portions of the population would be completely reliant on the government for their sustenance; this reliance could easily morph into coercion, as governments would hold a disproportionate amount of power over the citizenry.
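This inflationary mechanism can be sketched, purely as an illustration, with the textbook quantity-theory identity (a standard framework, not one any of the cited authors invoke by name): if a basic income is financed by expanding the money supply M while the velocity of money V and real output Y stay roughly constant, the price level P must rise roughly in proportion.

```latex
MV = PY
\quad\Longrightarrow\quad
P = \frac{MV}{Y},
\qquad
\frac{\Delta P}{P} \approx \frac{\Delta M}{M}
\quad \text{(holding } V \text{ and } Y \text{ roughly fixed)}
```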

The problem of unemployment in our not-too-distant automated future certainly demands to be addressed, and the solution is two-fold. First, incentive structures need to be created in tax law and broader society that favor the employment of humans over machines for certain positions. Current tax codes tend to reward automation; it would be prudent to adopt legal incentives that leave room for automation while still encouraging the use of a human workforce in certain roles. Second, the adoption of a sound monetary system that is immune to inflationary policy and excessive government intervention would allow for substantial capital accumulation, more effective transfers of wealth into the future, and the flow of capital to its most efficient uses. Blockchain-based protocols such as Bitcoin provide the security, immutability, and deterministic monetary policy necessary for such a sound monetary system.
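To make "deterministic monetary policy" concrete, here is a minimal sketch, assuming only the well-known 50 BTC subsidy and 210,000-block halving interval, of why Bitcoin's total supply is capped at roughly 21 million coins: the issuance schedule is a fixed geometric sum that anyone can verify, rather than a quantity set at an institution's discretion.

```python
# Minimal sketch of Bitcoin's fixed issuance schedule: the block subsidy
# starts at 50 BTC and halves every 210,000 blocks, so total supply is a
# geometric sum capped at roughly 21 million BTC. Real nodes compute this
# in integer satoshis; floats are used here purely for brevity.

INITIAL_SUBSIDY = 50.0        # BTC per block at launch
BLOCKS_PER_HALVING = 210_000  # blocks between halvings (roughly four years)
SATOSHI = 1e-8                # smallest unit of account, 10^-8 BTC

def asymptotic_supply() -> float:
    """Sum the block subsidy over every halving era until it falls below one satoshi."""
    supply, subsidy = 0.0, INITIAL_SUBSIDY
    while subsidy >= SATOSHI:
        supply += subsidy * BLOCKS_PER_HALVING
        subsidy /= 2
    return supply

print(f"Asymptotic supply: ~{asymptotic_supply():,.0f} BTC")  # ~21,000,000 BTC
```

Because this schedule is enforced by consensus rules rather than by discretionary policy, the future supply curve is knowable in advance, which is precisely the property being contrasted here with inflationary monetary practice.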

While critics and proponents of Artificial Intelligence point to ample reason for concern and for hope, the ethical questions surrounding AI are not new; they are continuations of questions that have been asked for centuries. The ever-growing impact of technological innovation remains bittersweet in that vast amounts of good or evil can be unleashed. Thus, the ultimate ethical dilemma of Artificial Intelligence does not hang on the ethicality of its development, but on the extension of traditional ethical questions to impacts of far greater reach. This growing impact calls on mankind to challenge itself and reaffirm its ethical value systems so that civilization may race toward further innovation without crushing the human spirit in the process.
