A public policy blog from AEI
The latest on technology policy from AEI, published daily.
Hardly a day goes by anymore without someone somewhere either extolling the virtues of, or prophesying doom for humankind as a consequence of, applications deploying artificial intelligence (AI) or its bedfellow, the infamous autonomous insert-anything-you-like-here (for the balance of this blog, Autonomous Apparatus or “AA”). Autonomous vehicles, for example, will apparently unleash a new wave of productivity increases to sweep through the transportation industry, while at the same time cutting a swathe through the livelihoods of truck drivers. In a similar vein, self-learning algorithms mining big data repositories can provide us with ever-better recommendations from burgeoning catalogs of video and audio content, even as they push content an Orwellian Big Brother might (or might not) want us to see.
For good or bad (or better or worse), AI and AA are features of a future to which humans will be required to adapt. To paraphrase a famous technological milestone of 50 years ago: “one small step for an algorithm, one giant adjustment for humankind.”
Extending the envelope
The increasing use of artificially intelligent and autonomous systems raises the question of how many, and what sorts of, activities and decisions otherwise undertaken by humans can conceivably be handed over to technology. Within the world of blockchain technologies and smart contracts, it has been proposed that autonomous firms — decentralized autonomous organizations, or DAOs — can be created without the need for human owners or human intervention.
Conceptually, DAOs take literally the “nexus of contracts” view of the firm that grew out of Ronald Coase’s work on why firms exist. If the relevant contracts are “smart” ones encoded on a blockchain and executed automatically, then it seems but a simple step to create a completely autonomous firm. In this view, a “DAO has only a single interest to protect: that of the business itself. It requires no employees or executive managers, thereby providing a service without consideration of salaries, intermediaries, or even profits. Businesses can survive on the most razor-thin margins imaginable, and only need to cover the cost of existing — nothing more.”
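The “executed automatically” idea at the heart of this argument can be sketched in a few lines of code. The following is a toy illustration in Python, not a real blockchain contract (actual smart contracts are typically written in languages such as Solidity); the escrow scenario and all names in it are hypothetical, chosen only to show that a “smart” contract is simply code whose terms enforce themselves once its conditions are met, with no human in the loop.

```python
class ToySmartContract:
    """Hypothetical escrow-style contract: marks itself settled
    automatically once deposits reach the agreed price."""

    def __init__(self, price):
        self.price = price      # the agreed contractual amount
        self.deposited = 0      # running total of funds received
        self.settled = False    # becomes True when terms are met

    def deposit(self, amount):
        self.deposited += amount
        self._maybe_execute()

    def _maybe_execute(self):
        # The "autonomous" step: no human approves settlement;
        # the code itself checks and enforces the contract's terms.
        if not self.settled and self.deposited >= self.price:
            self.settled = True


contract = ToySmartContract(price=100)
contract.deposit(60)        # below the price: nothing happens
contract.deposit(40)        # threshold reached: self-executes
print(contract.settled)     # -> True
```

The point of the sketch is what it leaves out: nothing in the code identifies an owner, a manager, or anyone accountable for the consequences of settlement — which is precisely the gap the rest of this post takes up.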
Sealing humans’ fate?
However appealing such a view might seem, it overlooks the fact that firms are, as complex adaptive systems, far more than the sum of their transactional parts. They are institutions created to serve specific human purposes.
In essence, firms are legal institutions created to allow their constituent individual human “members” to act collectively (that is, as a single body corporate or legal person) for the purpose of engaging in trade (i.e., transactions). They hold assets and liabilities ultimately owned and controlled by those human “members.” The members (via the governance and management agents they appoint) are liable for both the financial and wider societal obligations arising from transacting. Even firms with no explicit owners holding a defined claim on their assets and surpluses (classic non-profits such as charitable trusts) are incorporated with human trustees who bear responsibility for the firm’s obligations. These humans make decisions about what the firm will do (how its assets will be applied) that have consequences for a wide range of stakeholders beyond the humans who constitute its “body corporate.”
So can a truly autonomous firm be constituted, without any explicit reference to, or engagement with, human actors?
Institutions and their governance arrangements evolve
Although technologically constructed artificially intelligent or autonomous institutions such as firms — or even complete judicial systems such as Kleros — may appear to some to be on the horizon, the reality is that institutions have evolved over time to serve important human purposes far greater than merely the transactions in which they engage. While algorithms may automate some elements of transactions, human actors must still play important roles in the governance arrangements within which the “artificially intelligent” algorithms operate and the autonomous transactions are generated.
Undoubtedly, there is much to be learned and considerable scope for evolution of new governance arrangements that take account of the special characteristics of AI and AA. However, they must ultimately serve human purposes, and humans will be both involved in their creation and held accountable for how they achieve these aims — that is, the governance of their operation.
If there is any doubt that humans will continue to be directly accountable for the consequences of the AI and AA they deploy, one need look no further than Mark Zuckerberg and Facebook in the aftermath of the Christchurch mosque massacre.
While transactions may be autonomous, institutions remain grounded on the humans whose interests they serve.
1789 Massachusetts Avenue, NW, Washington, DC 20036
© 2019 American Enterprise Institute