In modern digital life, unilateral contracts (e.g., terms of service) play a substantial role. Few users, however, read these documents before accepting the terms within. Even "sophisticated" consumers who might be held to higher legal standards, including prominent law professors, consumer law academics, and the Chief Justice of the United States Supreme Court, do not read such contracts (Benoliel & Becher 2019). The generally accepted reason for this behavior is that many legal documents (not just these unilateral contracts) are too long and their language too complicated (see e.g., Williams 2011). Several empirical studies confirm that these click-through unilateral contracts are generally unreadable, with an average reading level corresponding to academic articles rather than material targeted at the general public (Benoliel & Becher 2019). The duty-to-read doctrine of US contract law nevertheless holds contracting parties responsible for the written terms of their contracts, whether or not they actually read them. The challenge lies in how to increase meaningful consumer engagement with these terms of service.
Schriver reviewed the literature of the movement toward plain language in US business and government between 1940 and 2015, identifying a corpus of more than 100 documents (Schriver 2017). This review indicated that the plain language movement evolved over this 75-year period from a narrow, sentence-based focus on readability to a broader whole-text approach concerned with the usability and accessibility of the entire document (whether in paper or electronic form), and with whether people trusted the content. Comprehensibility has more aspects than the linguistic or stylistic (Ződi 2019). In this context, usability often refers to whether the reader is able to extract and use information from the legal document in some other context, but it should certainly include using that information to make an informed decision to accept or reject a contract. Plain language is not a substitute for consumer engagement, but it may be a prerequisite.
Unilateral contracts exist where one party has significantly more market power than the other, such that no contract negotiation takes place. Advocates of unilateral contracts may argue that contract negotiation is a point of economic friction that slows economic activity and should be eliminated. Absent direct competition, counter-party market forces alone seem insufficient to discipline the drafters of these contracts. Some US states have enacted plain language statutes, but these are generally of limited scope and lack objective criteria for readability. In the UK, consumer contracts and consumer notices are required to be expressed in plain and intelligible language under the Consumer Rights Act 2015. Determining whether a contract is expressed in plain and intelligible language involves resource-intensive work by regulators and difficult adjudications by courts. The technologies of natural language processing and text analysis have improved markedly in recent years, and a variety of reading scores are now readily achievable through computerized analysis of these texts. Identification and selection of specific metrics, and standardization of acceptable performance benchmarks, remain undone. While reading scores may play a role, further work is needed to reduce them to tools of everyday practice (Conklin et al. 2019). Automated summarization of legal texts (as opposed to re-writing) has also been proposed to aid comprehensibility, but the current state of this technology does not appear adequate (Manor & Li 2019). Increased consumer engagement with the terms of service may increase the economic friction of the transaction, but from a public policy perspective, this should be balanced against the public good of improved consumer awareness of the contracts consumers are entering into.
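For example, standard readability formulas are now only a few lines of code away. Below is a minimal sketch assuming the open-source textstat package and a local file containing the contract text; both are illustrative assumptions rather than a proposed regulatory tool.

```python
# Minimal readability-scoring sketch (pip install textstat).
import textstat

terms = open("terms_of_service.txt").read()

# Flesch Reading Ease: higher is easier. General-public material
# typically scores 60+; academic prose often falls below 30.
print("Reading ease:", textstat.flesch_reading_ease(terms))

# Flesch-Kincaid Grade: approximate US school grade level required.
print("Grade level:", textstat.flesch_kincaid_grade(terms))
```

As the text notes, which score to standardize on, and what benchmark counts as "plain and intelligible", remain open questions.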
As Ződi notes, readability and grammar are insufficient to explain the incomprehensibility of legal texts (Ződi 2019). Improving the plain language of texts may have some benefits, reducing skipped sections, repeated readings, or reading abandonment compared to unimproved legal texts. It does not, however, seem to resolve the general tendency to skim through or read only parts of terms and conditions (Rossetti et al. 2020). The length of terms and conditions may have some impact on readability scores and on users' motivation to avoid reading them. One approach to increase user engagement with the terms and conditions would be to increase the number of clicks required in proportion to their length, as sketched below. The intuition is that longer documents may be more likely to contain significant clauses worthy of consumer attention and consideration. Such an approach also creates an incentive for the drafters of these unilateral contracts to reduce their length in order to reduce the economic friction of the additional clicks. Contract length is, of course, a crude metric, but its simplicity enables easier mechanical reduction to practice. More sophisticated approaches are certainly possible based on the semantics of the terms of the service contract. For example, the terms of service could be segregated into chunks that are more meaningful or impactful to the consumer, e.g., those terms that impact or constrain the options available to the consumer versus the service provider. Such approaches are currently less tractable for automated chunking of contract text, but that may change with improvements in technologies like natural language processing and text analytics.
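A minimal sketch of the click-per-length mechanism follows; the 500-words-per-click threshold is an illustrative assumption, not a proposed standard.

```python
import math

def clicks_required(word_count: int, words_per_click: int = 500) -> int:
    """One acknowledgment click per block of contract text, so longer
    contracts demand proportionally more engagement (and impose more
    friction on the drafters of long documents)."""
    return max(1, math.ceil(word_count / words_per_click))

print(clicks_required(800))    # 2 clicks for a short contract
print(clicks_required(15000))  # 30 clicks for a very long one
```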
References
Benoliel, U., & Becher, S. I. (2019). The duty to read the unreadable. Boston College Law Review, 60, 2255.
Williams, C. (2011). Legal English and plain language: an update. ESP Across Cultures, 8, 139-151.
Schriver, K. A. (2017). Plain language in the US gains momentum: 1940–2015. IEEE Transactions on Professional Communication, 60(4), 343-383.
Ződi, Z. (2019). The limits of plain legal language: understanding the comprehensible style in law. International Journal of Law in Context, 15(3), 246-262.
Conklin, K., Hyde, R., & Parente, F. (2019). Assessing plain and intelligible language in the Consumer Rights Act: a role for reading scores? Legal Studies.
Manor, L., & Li, J. J. (2019). Plain English summarization of contracts. arXiv preprint arXiv:1906.00424.
Rossetti, A., Cadwell, P., & O'Brien, S. (2020). "The terms and conditions came back to bite": plain language and online financial content for older adults. Available online at: http://doras.dcu.ie/24532/
Presenting my paper on: “ClickThru Terms of Service: Blockchain smart contracts to improve consumer engagement?”
Technology entrepreneurship has enabled the widespread commercial adoption of internet technologies, which in turn have shifted consumer commerce toward an online environment. As the online consumer experience becomes more predominant, various actors have recognized the significance of developing appropriate regulations for online consumer experiences to reflect policy objectives including consumer protection. Network efficiencies and large-scale infrastructures enable a single provider to deliver services to mass-market consumers. Contract negotiation at such scale is typically not the "meeting of the minds" envisaged by contract law, with terms carefully considered by knowledgeable parties. Such services are typically delivered under terms of service developed by the service provider alone, and accepted by the consumer with a single click and little, if any, consideration.
Consumers typically ignore these terms of service, relying on consumer protection laws or the courts to ensure fair treatment. Consumer protection laws have focused primarily on requirements directed at the service provider. Common law courts recognize contract defenses against unconscionable terms, but these rely on community standards of reasonable behavior, which may be difficult to ascertain when the adoption of new technologies and practices is not uniform. The successful adoption of new internet-based technologies and commercial practices has encouraged more technology entrepreneurship in a positive feedback cycle.
Electronic signatures have become the norm as transactions increasingly move online, unfortunately with little thought or evaluation by consumers. A swath of new internet-based technologies and commercial practices enabled by blockchains is expected to become mainstream in the near future. Regulatory and policy decision makers are considering necessary regulatory changes as these technologies evolve to support a greater range of more complex transactions affecting not just financial assets, but also cyber-physical infrastructure.
To avoid the problems created by oblivious signatures, efforts at increasing consumer engagement with the terms of service may be a useful and tractable step towards improved consumer experiences. By comparison, previous efforts focused on plain language may have increased comprehensibility but ultimately failed to achieve the consumer attention necessary for a true "meeting of the minds". Blockchain smart contracts appear to provide promising capabilities for greater consumer engagement with the terms and conditions of online services by enabling, e.g., multiple signatures per transaction and more sophisticated transaction logic to verify engagement. If service providers and regulators also engage, by considering such click-through licensing processes through the lens of consumer engagement, consumer-oriented blockchain smart contracts could become more widespread.
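Production smart contracts would typically be written in a contract language such as Solidity; the Python sketch below only illustrates the kind of engagement-verification logic involved, where the final signature is gated on per-section acknowledgments. The section names and the all-sections rule are illustrative assumptions.

```python
class EngagementGate:
    """Record per-section acknowledgments; permit the final signature
    only once every section of the terms has been acknowledged."""

    def __init__(self, sections: list[str]):
        self.sections = set(sections)
        self.acknowledged: set[str] = set()

    def acknowledge(self, section: str) -> None:
        if section not in self.sections:
            raise ValueError(f"Unknown section: {section}")
        self.acknowledged.add(section)

    def may_sign(self) -> bool:
        return self.acknowledged == self.sections

gate = EngagementGate(["data collection", "arbitration", "termination"])
gate.acknowledge("data collection")
assert not gate.may_sign()  # two sections remain unacknowledged
```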
What is Public Relations? The Public Relations Society of America defines it as: "Public relations is a strategic communication process that builds mutually beneficial relationships between organizations and their publics." Public relations is the art of crafting and delivering messages that inform and persuade the public, and that get people to change opinions or take action. The Public Relations Society of America also identifies a number of disciplines and functions within PR.
There was a time when many companies did not see the value of public relations unless a crisis happened, which, unfortunately, is usually too late. Today, the lines are blurring between the traditional definition of public relations and other forms of marketing. With an increasing portion of commercial activities occurring online, attitudes are changing, and more executives now see PR as a way of earning, rather than interrupting, people's online attention, and with that, gaining publicity for free from trusted, unpaid or earned channels. According to the American Marketing Association, "Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large." Marketing is much broader than public relations; it involves communicating but is more comprehensive. One approach is to categorize media assets as owned, earned or paid. Owned media is content and brand assets, like images, that you create and typically protect with trademarks and copyrights. Paid media is advertising, where you pay a media outlet to place or amplify your messages. Earned media is where others voluntarily share your message. Content marketing often supports PR goals, but content marketing goes beyond public relations. In a world of increasing information sharing, the line between public relations and social media marketing is often blurry. Both can provide value in positioning your brand messaging within their respective media outlets and audiences, with differences in reach, tone, immediacy and the degree of customer engagement required.
Robert Cialdini coined[1] the term Social Proof to describe a social and psychological phenomenon where people copy the actions of others in a given situation. Social Credibility is the ability to connect and engage with other people, and may be measured by the number of connections or followers on social media or by the audience sizes of other online media. Online media and digital marketing are intimately intertwined and continuing to evolve. Audiences increasingly rely on online tools such as search engines to locate the information they need for purchase decisions. Establishing social credibility through reinforcement of messages from multiple sources becomes important both for search engines and for the humans using them. While definitions and categorizations are helpful for understanding the field, for commercial practitioners it is a matter of selecting the appropriate tools to meet the business objectives.
Business objectives for PR campaigns generally revolve around creating or enhancing an online presence; increasing brand credibility; increasing leads, sales or profits; or changing the way people think about a business. These objectives can also be targeted towards particular audiences, e.g., potential customers, investors/acquirers, employees, regulators, etc. For many, the process of crafting the value proposition and messaging is itself of significant value because of the clarity it can bring to the business. This is particularly relevant for early-stage technology research commercialization efforts, where startups may need to both pivot their strategies and validate their approaches with larger audiences as they attempt to scale. Such startups may have awesome content on their websites but struggle to bridge the chasm between early adopters in their niche and larger audiences who don't know they exist or what value they can deliver. SEO optimization of the content alone does not engage broader audiences. Social influencers can provide some attention from their followers, but the fit of their audience may be difficult to match with the firm's business objectives. Dedicated PR firms can provide value, but they may be expensive for cost-conscious startups. Approaches like Macadamia Media may be a cost-effective compromise to raise exposure and bring traffic to the startup's content.
[1] Cialdini, R. B. (1984). Influence: The new psychology of modern persuasion. Morrow.
When the operators issued their whitepaper[1] challenging the industry to address network function virtualization in 2012, the expected benefits included a number of improvements in operating expenses, among them improved operational efficiency from the higher uniformity of the infrastructure via:
IT orchestration mechanisms providing automated installation, scaling-up and scaling out of capacity, and re-use of Virtual Machine (VM) builds.
More uniform staff skill base: The skills base across the industry for operating standard high volume IT servers is much larger and less fragmented than for today’s telecom-specific network equipment.
Reduction in variety of equipment for planning & provisioning, assuming tools are developed for automation and to deal with the increased software complexity of virtualisation.
Mitigating failures by automated re-configuration and moving network workloads onto spare capacity using IT orchestration mechanisms, hence reducing the cost of 24/7 operations.
More efficiency between IT and Network Operations: shared cloud infrastructure leading to shared operations.
Support in-service software upgrade (ISSU) with easy reversion by installing the new version of a Virtualised Network Appliance (VNA) as a new Virtual Machine (VM).
While there have been a number of studies addressing the potential for capex improvements (see e.g., Naudts et al. 2012; Kar et al. 2020), there are relatively few studies in the literature concerning opex improvements (see e.g., Hernandez-Valencia et al. 2015; Bouras et al. 2016; Karakus & Durresi 2019). At least partly, this reflects the commercial sensitivity of expense data at network operators. Headcount is a significant cost factor in operations, and opex improvements could imply headcount reductions, which also makes the topic sensitive for network operator staff.
The transformative nature of NFV, transitioning equipment spend from custom hardware to software on generic computing infrastructure, generated significant interest and rhetoric at the time (see e.g., Li & Chen 2015), but other new technology introductions have also claimed significant opex improvements (e.g., GMPLS (Pasqualini et al. 2005)). Telecommunications operators are large-scale businesses, so opex reduction is an ongoing area of focus. The telecom industry is characterized by significant capital investments in infrastructure, leading to significant debt loads. Average industry debt ratios have been in the range 0.69 to 0.81 over the past few years (readyratios.com), implying operating expenses include a significant component for depreciation and amortization. Examining the annual reports of tier 1 carriers shows depreciation and amortization in the range of 15-20% of operating expenses. Telecom services are mass-market services, implying significant sales expenses to reach that market; the annual reports of tier 1 carriers show sales, general and administrative (SG&A) costs on the order of 25% of operating expenses. The operations efficiency improvements expected for NFV don't impact depreciation or SG&A expenses, hence at most they can impact the remaining 55-60% of the company's total operating expenses.
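As a rough check of that bound, the arithmetic can be made explicit; the figures below are illustrative midpoints of the annual-report ranges cited above.

```python
# Illustrative midpoints of the annual-report ranges cited above.
d_and_a = 0.175   # depreciation & amortization: 15-20% of total opex
sga = 0.25        # sales, general & administrative: ~25% of total opex

addressable = 1.0 - d_and_a - sga
print(f"Share of opex addressable by NFV efficiencies: {addressable:.1%}")
# -> 57.5%, i.e. within the 55-60% range quoted above
```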
Bouras et al. (2016) expected an opex reduction of 63% compared to their baseline, but it is not clear how that would relate to reportable operating expenses for the company. Hernandez-Valencia et al. (2015) also provided numerical percentage ranges for expected savings in a number of areas, but the relation to reportable operating expenses is similarly unclear. Other studies (Karakus & Durresi 2019; Pasqualini et al. 2005) identified factors affecting operating expenses but did not use consistent terminology or scope in the factors identified. Environmental operations costs of power and real estate were identified by Hernandez-Valencia et al. (2015) and Pasqualini et al. (2005), but Karakus & Durresi (2019) refer only to energy-related costs. Hernandez-Valencia et al. (2015) identified service operations costs of assurance and onboarding; Karakus & Durresi (2019) identified service provisioning; and Pasqualini et al. (2005) referred to service management processes: SLA negotiations, service provisioning, service cessation, and service moves/changes.
The lack of consistent operating cost models may be explained by variation across operators. Service definitions and designs may differ across operators. Environmental operations expenses like real estate and power could be affected significantly by operators' preferences for private vs public cloud infrastructures. The design of operations reflects a company's strategic choices on what to capitalize as fixed infrastructure and may be influenced by other factors (e.g., tax policies, regulatory regimes). Numerical targets for opex reductions therefore seem difficult to generalize across organizations. Even within a single organization, tracking such targets at the corporate level may be significantly impacted by other corporate activities (e.g., M&A) that affect reportable metrics. A better approach may be to focus on improvements in particular operational tasks that can be generalized across multiple operators and architectures.
References
Naudts, B., Kind, M., Westphal, F. J., Verbrugge, S., Colle, D., & Pickavet, M. (2012, October). Techno-economic analysis of software defined networking as architecture for the virtualization of a mobile network. In 2012 European workshop on software defined networking (pp. 67-72). IEEE.
Kar, B., Wu, E. H. K., & Lin, Y. D. (2020). Communication and Computing Cost Optimization of Meshed Hierarchical NFV Datacenters. IEEE Access, 8, 94795-94809.
Hernandez-Valencia, E., Izzo, S., & Polonsky, B. (2015). How will NFV/SDN transform service provider opex? IEEE Network, 29(3), 60-67.
Bouras, C., Ntarzanos, P., & Papazois, A. (2016, October). Cost modeling for SDN/NFV based mobile 5G networks. In 2016 8th international congress on ultra-modern telecommunications and control systems and workshops (ICUMT) (pp. 56-61). IEEE.
Karakus, M., & Durresi, A. (2019). An economic framework for analysis of network architectures: SDN and MPLS cases. Journal of Network and Computer Applications, 136, 132-146.
Li, Y., & Chen, M. (2015). Software-defined network function virtualization: A survey. IEEE Access, 3, 2542-2553.
Pasqualini, S., Kirstadter, A., Iselt, A., Chahine, R., Verbrugge, S., Colle, D., … & Demeester, P. (2005). Influence of GMPLS on network providers’ operational expenditures: a quantitative study. IEEE Communications Magazine, 43(7), 28-38.
You have a product and need to find a way to get it in front of the right people; a Go-To-Market (GTM) strategy is a comprehensive action plan that details exactly how to do that. A GTM strategy is a business tool and a critical component of the organization's business plans. More specifically, a GTM strategy is an organization's plan to deliver its unique value proposition to customers. Managers, product marketing specialists, and other decision-makers use the GTM strategy to coordinate their efforts and ensure a smooth launch of a new product, entry into an unfamiliar market, or the re-launch of a former brand or company. A regular marketing strategy is intended to be a long-term set of rules, principles, and goals that guides all of your messaging through the 5Ps of the marketing mix: Product, Price, Promotion, Place, People. A GTM strategy, by contrast, is a (relatively) short-term, step-by-step map that focuses on launching one specific product. While each product has a different strategy, the end goal is the same: to achieve a competitive advantage by optimizing the choices inherent in delivering the value proposition to the customer. If your product is Point A and your customer is Point B, then a GTM strategy is everything that happens along the path between the two. There may be many different paths, but a good GTM strategy is the plan for targeting the right pain point with the right sales and marketing processes, so you can grow your business at the optimum pace.
The components of a GTM strategy include market segmentation and messaging, a sales method, your ideal customer base, attractive pricing, and the unique problem your product solves or improves. This may involve engaging with a new market, or it may simply be presenting a new idea to your existing client base. The pricing strategy and distribution plan will certainly impact the results, but it is easy to bias these with company constraints if you start there. Today, businesses need to start with the customer before building pricing and distribution strategies. Your target market should provide a clear definition of your target audience. This definition involves the demographic, psychographic, geographical, and other variables that can help you narrow down your focus. While statistical data can provide some perspective here, you'll also need to create buyer personas to pinpoint the ideal profiles that you want to target. Not all segments of the total market will be equally attractive. Markets can be segmented in a variety of ways, but comparing segments in terms of business value vs implementation complexity can help focus on particular segments (e.g., easy wins from segments with high business value and low implementation complexity). All markets have their unique aspects, but broad categories such as Business to Business (B2B), Business to Consumer (B2C) or platforms supporting Consumer to Consumer (C2C) transactions can be helpful because target markets in these categories tend to have similar scale and regulatory issues. For example, B2B transactions differ from consumer transactions, with an average of seven people involved in every business buying decision, e.g.:
The initiator (who identifies your product/service as relevant)
The End User
The Buyer (funding the purchase)
The Decision maker (approving the purchase)
The final approver (depending on the organization's schedule of authorizations)
The Influencer (convincing decision makers of the purchase need)
The gatekeeper (who can kill the purchase decision for other reasons e.g. compliance with corporate security policies)
The Value Proposition and Product Messaging (the problems the product solves, etc.) are two other key components of your GTM strategy. These help you position your brand and provide clarity to your potential customers. The value proposition can be thought of as a compelling story that helps customers understand why they need the product or service to address a particular pain point. Developing buyer personas around these pain points can help clarify the value proposition and product messaging. In the B2B context, additional personas for the other people involved in business buying decisions can be helpful. For example, an end user's pain point might be the time taken on a particular operational task; the value proposition can then be derived from the time saved, with messaging developed to emphasize saving time on that task. The pain point for an influencer might be considerably different (e.g., the quality of data obtained from the operation), requiring a different value proposition and messaging.
For those of us attempting to build a new business, an incorrect or suboptimal GTM strategy can cost years of going the wrong direction with product development and marketing. A GTM strategy developed with quantifiable data rather than "gut feel" helps you keep a realistic, practical perspective, and lets you identify and pay appropriate attention to the less-exciting bits that are still fundamental to your success. Crucially, a solid and comprehensive GTM strategy will also give you a framework for measuring your progress along the way, and help you detect and diagnose any issues that are hampering your success before they have the chance to run your venture into the ground. Identifying appropriate metrics and benchmarks can help you evaluate the performance of implementation efforts, as well as validate the GTM strategy itself. Some interesting metrics include pipeline coverage (the ratio of prospects earlier in the pipeline to forecasted sales), sales team performance (% above vs below forecasted quota), lead conversion rates, and marketing/sales budgets as a percentage of revenue.
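A minimal sketch of how two of these metrics might be computed; the quarterly figures are hypothetical.

```python
def pipeline_coverage(pipeline_value: float, forecasted_sales: float) -> float:
    """Ratio of qualified pipeline value to forecasted sales for the period."""
    return pipeline_value / forecasted_sales

def lead_conversion_rate(converted_leads: int, total_leads: int) -> float:
    """Fraction of leads that converted to customers."""
    return converted_leads / total_leads

# Hypothetical quarter: $1.2M of qualified pipeline against a $400K forecast.
print(f"{pipeline_coverage(1_200_000, 400_000):.1f}x coverage")  # 3.0x
print(f"{lead_conversion_rate(45, 900):.1%} lead conversion")    # 5.0%
```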
Commercializing technology research obviously requires a GTM strategy when planning for commercial success. Any particular GTM strategy would depend on the specific circumstances of product/service characteristics, targeted markets, company resources etc. Even with a customized GTM strategy in hand, research commercialization efforts can experience difficulty gaining attention/traction in their target markets for a variety of reasons; but failing to develop an adequate GTM strategy significantly reduces the chances of success. An often overlooked aspect of the GTM strategy for startups is the role of public relations in establishing an online presence, building a brand and messaging the key value propositions. If you would like assistance developing your GTM strategy you can contact me.
The development of telegraphy brought about the same types of dispute that occur in the era of the internet, and judges were required then, as now, to adapt old laws to new technologies. Exchanges of telegrams were generally held to provide evidence of intent to contract within the relevant statute of frauds. Documents sent by facsimile transmission have also generally been accepted in common law jurisdictions. There have been cases where judges have implied that a document has been signed even in the absence of a manuscript signature, where there is sufficient evidence to show the person signing the document adopted its content. Many bureaucratic processes have forms requiring signatures where there may be no legal authority or requirement for a signature (see e.g., Wynn & Israel 2018). Manuscript signatures can, of course, be forged. To test both the validity and the effectiveness of a manuscript signature, various jurisdictions have required the signatures on some classes of documents (e.g., wills, land conveyances, high-value contracts) to be affixed in the presence of a witness or an authorized official, such as a notary. The function of the witness or notary is to provide additional evidence assuring the validity of the signature on the document. This generally requires that witnesses or notaries be independent and have no conflicts of interest (e.g., medical staff witnessing patient legal documents (Starr 2016)). Witnesses and notarial acts thus serve to authenticate a signature for legal purposes.
Sometimes there is a legal need for an official to authenticate a document, e.g., for the document to be accepted in a foreign legal process. Georgia has two separate and distinct state agencies authorized[1] to authenticate documents for use by foreign countries. If the document is going to a foreign country that participates in the Hague Convention, an Apostille is obtained via the Georgia Superior Court Clerks. If the document is going to a foreign country that is not a participant in the Hague Convention, it is authenticated via the Georgia Secretary of State with a Great Seal Certification. For domestic use, a notarial act with a certification of the notary public (a certificate of authority from the county court that the notary is authorized by the county) is generally sufficient.
Signatures for legal entities
Legal entities (companies, LLCs, partnerships, etc.) have many of the same rights and obligations as natural persons, which may include a need for a signature, e.g., to enter into binding contracts or to endorse government-required documents. When signing for a corporation, a simple signature line with the name of the corporate officer is not the legally acceptable method of signature; instead, the signature must be presented in a signature block with the name of the corporation and the name, title and signature of a corporate officer.
Legal entities can have their own signatures, distinct from those of natural persons. Most states have enabled a Company Seal in their statutes enabling corporations. Generally, the Company Seal acts as a signature executing a document on behalf of the corporation, but see:
The seal of the corporation may be affixed to any document executed by the corporation, but the absence of the seal shall not itself impair the validity of the document or of any action taken in pursuance thereof or in reliance thereon.
(O.C.G.A. 14-3-846 (c))
In many cases the Company Seal alone is insufficient evidence of company endorsement. Execution of instruments conveying interests in real property or releasing security agreements requires natural-person signatures, often two (e.g., company president and company secretary); the Company Seal is not necessary. (O.C.G.A. 14-5-7 (2010))
When a corporation is doing business, it must duly authorize each transaction. Entering contracts, concluding loans and endorsing checks or drafts all require the signature of a corporate officer with the authority to conduct business transactions on behalf of the company. Determining what constitutes a legal signature for a corporation may involve reading the bylaws, securing a board resolution or requesting some other certification of authority. Board resolutions may give general authority to act on behalf of the corporation or more limited powers to transact business (e.g., a schedule of authorizations). In some instances, unauthorized signatures will bind a corporation to protect the interests of innocent third parties. Apparent authority may also exist if two officers of the same corporation, such as the secretary and president, endorse an instrument. Bylaws and board resolutions are not filed with the Secretary of State in the state of incorporation, so you may have to request a copy from the corporation itself.
Notarial acts on signatures
For Notary Law in Georgia, see generally, O.C.G.A. 45-17-1 (2010). Notaries Public have authority anywhere within the State of Georgia to:
Witness or attest signature or execution of deeds and other written instruments;
Take acknowledgments;
Administer oaths and affirmations in all matters incidental to their duties as commercial officers and all other oaths and affirmations which are not by law required to be administered by a particular officer;
Witness affidavits upon oath or affirmation;
Take verifications upon oath or affirmation;
Make certified copies, provided that the document presented for copying is an original document and is neither a public record nor a publicly recorded document, certified copies of which are available from an official source other than a notary; and provided that the document was photocopied under supervision of the notary; and
Perform such other acts as notaries are authorized to perform by the laws of the State of Georgia.
A “notarial act” means any act that a notary is authorized to perform and includes, without limitation, attestation, the taking of an acknowledgment, the administration of an oath or affirmation, the taking of a verification upon an oath or affirmation, and the certification of a copy. All notarial acts must be accompanied by the Notary’s seal. “Attesting” and “attestation” are synonymous and mean the notarial act of witnessing or attesting a signature or execution of a deed or other written instrument, where such notarial act does not involve the taking of an acknowledgment, the administering of an oath or affirmation, the taking of a verification, or the certification of a copy. “Notarial certificate” means the notary’s documentation of a notarial act.
Notaries are commissioned for a four-year term by the Clerk of Superior Court in their county. Requirements generally include minimum age, legal residency in the state (or in a bordering state, if employed in Georgia), and the ability to read and write English. The process[2] in Georgia requires an application form, a fee, an oath of office, and a notary seal.
The advent of COVID-19 has accelerated the trend towards electronic signatures and records. Groups like the Electronic Signature and Records Association[3] have been promoting uniform statutes and rules in areas such as eNotary, IRS eSignature policy, electronic will legislation, blockchain, etc. Remote Online Notarization (RON) has been enabled in many states[4] as a result of the COVID-19 outbreak, whether via RON legislation or emergency authorizations by state authorities. For example, in Arizona, ARS 44-7011 allows completion of a notarial act on an electronic message or document without the imprint of a notary seal. In Georgia, an Executive Order[5] by Governor Kemp (dated 3/31/2020) authorizes the use of real-time audiovisual communications for notarizations.
Legislative activity is often broadly framed so as to enable a variety of technical and commercial solutions to evolve. Public key cryptography has emerged as a technology with widespread commercial adoption, and other technologies like blockchains build on it to enable larger systems. Cryptography research continues to enable new capabilities in fields like security and privacy, including capabilities specifically relevant to signature-process roles like witnesses and notaries.
References
Starr, K. T. (2016). Should you witness a signature on a patient’s personal legal document?. Nursing2019, 46(12), 14.
Wynn, L. L., & Israel, M. (2018). The fetishes of consent: Signatures, paper, and writing in research ethics review. American Anthropologist, 120(4), 795-806.
Smart contracts and other blockchain transactions (e.g., transfers of assets represented by digital tokens) are purported to have legal significance. That legal significance hinges on assent by the parties, usually captured in a contract signature. Blockchains, distributed ledgers, smart contracts and similar technologies rely on cryptographic signatures for authentication and authorization of account transactions. Cryptographic keys are often treated as representing account-holder identities, and technologists speak of "signing" documents with keys in public key cryptography. These cryptographic signature operations, however, do not always have all the same characteristics as traditional manuscript signatures.
The function of a signature is generally determined by the nature and content of the document to which it is affixed (e.g., indicating agreement with or endorsement of the material above the signature). Historically, manuscript signatures have taken a variety of formats: handwritten names, initials, symbols (e.g., an X). In a legal context, a signature can provide a number of functions: primary evidential functions, secondary evidential functions, cautionary functions, protective functions, channeling functions, and record-keeping functions (Mason 2016, Ch. 1). Evidence relevant to manuscript signatures includes the identity of the person affixing the signature and the intention to authenticate and adopt the document. For legal entities (partnerships, corporations, etc.), the scope of signature authority can also be a significant factor. Several legal defenses may be applicable to manuscript signatures: forgery, conditionality, misrepresentation, not an act of the person signing, mental incapacity, mistake, document altered after signature, the person signing not realizing the document had legal significance, and other defenses based on unreasonableness or unfairness.
The ESIGN Act (2000) established the general validity of electronic signatures and electronic contracts. It defines an electronic signature as:
The term “electronic signature” means an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record. (15 USC 7006 (5))
An example of an electronic signature is a biometric signature. A biometric signature is a binary coded representation of a person’s biometric characteristics used for authentication purposes in distributed computing systems (Bromme 2003). Another example definition of a type of electronic signature is the S-signature from the USPTO:
(d)(2) S-signature. An S-signature is a signature inserted between forward slash marks, but not a handwritten signature … (i)The S-signature must consist only of letters, or Arabic numerals, or both, with appropriate spaces and commas, periods, apostrophes, or hyphens for punctuation… (e.g., /Dr. James T. Jones, Jr./)..
(iii) The signer’s name must be:
(A) Presented in printed or typed form preferably immediately below or adjacent the S-signature, and
(B) Reasonably specific enough so that the identity of the signer can be readily recognized. (37 CFR Sec. 1.4)
Other federal regulations (e.g., CFTC regulations, see 17 CFR Part 1 Sec. 1.3) have very similar definitions of electronic signatures to the ESIGN Act. The UETA electronic signature definition (Uniform Law Commission 2019, Sect. 2(7)-(8)) as enacted by most states also has very similar language to the ESIGN Act (see e.g., Georgia's O.C.G.A. 10-12-2 (2010)). Arizona, Nevada and Tennessee, however, have amended their UETA statutes to incorporate blockchain and smart contracts (see e.g., A.R.S. 44-7061). The Uniform Law Commission guidance (2019b) considered this redundant and subject to preemption by the federal act. Beyond electronic signatures, FDA regulations identify and distinguish a digital signature:
(5) Digital signature means an electronic signature based upon cryptographic methods of originator authentication, computed by using a set of rules and a set of parameters such that the identity of the signer and the integrity of the data can be verified.
(7) Electronic signature means a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual’s handwritten signature. (21 CFR Sec. 11.3)
Internationally, this distinction between electronic and digital signatures is also captured in UNCITRAL's Model Law on Electronic Signatures (United Nations 2001) and its associated guide, which examined various electronic signature techniques that purported to provide functional equivalents to (a) handwritten signatures and (b) other kinds of authentication mechanisms used in a paper-based environment (e.g., seals or stamps). Electronic signatures were categorized into digital signatures based on public key cryptography and other electronic signature mechanisms (e.g., biometrics, PINs, clicking an acknowledgement box, etc.). NIST's Digital Signature Standard (Barker 2013) defines a digital signature algorithm based on the work of Rivest, Shamir and Adleman (1978). ANSI (1998) also defines an algorithm based on elliptic curve cryptography. Other jurisdictions have similar standards developed by other standards bodies (e.g., ETSI, ISO).
Use of cryptography for authentication purposes by producing a digital signature does not necessarily imply the use of cryptography to make any information confidential, since the digital signature may be merely appended to a non-encrypted message. A "hash function" is used in both creating and verifying a digital signature. A hash function is a mathematical process, based on an algorithm, which creates a standard-length, compressed, substantially unique digital representation (often referred to as a "message digest", "hash value" or "hash result") of the message. Common public-key algorithms such as RSA are based on an important feature of large prime numbers: once they are multiplied together to produce a new number, it is particularly difficult and time-consuming to determine which two prime numbers created that new, larger number. Cryptographic algorithms such as RSA or elliptic curve schemes have no publicly known methods for rapidly recovering the private key. Brute-force approaches relying on massive computation become more feasible with technology trends (e.g., Moore's law) reducing the cost of computing and with the commercial availability of massive cloud computing resources (e.g., Microsoft Azure, Amazon EC2, etc.). Quantum computing developments also threaten to undermine these algorithms, hence the recent interest in improved ("post-quantum") cryptographic algorithms (see e.g., Alagic et al. 2019).
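As a concrete illustration of the hash-then-sign process described above, here is a minimal sketch using the widely adopted Python cryptography package; the message text is, of course, arbitrary.

```python
# Minimal digital-signature sketch (pip install cryptography):
# hash-then-sign with RSA, then verify with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to the terms of service."

# Sign: the library hashes the message (the "message digest") and
# signs that digest with the private key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# Verify: anyone holding the public key can check both the signer's
# key and the integrity of the message; any alteration fails.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

Note that the message itself travels in the clear; only the signature involves the private key, matching the point above that signing does not imply confidentiality.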
Cryptographic signatures, then, are the basis on which blockchains and smart contracts rely for asserting the legal significance of transactions binding the parties. Beyond the creation of a signature, it is the operations and processes around those cryptographic signatures (in contrast to the operations and processes around manuscript signatures) that sustain any legal significance these bits of information carry.
References
Alagic, G., Alperin-Sheriff, J. M., Apon, D. C., Cooper, D. A., Dang, Q. H., Miller, C. A., … Robinson, A. Y. (2019). Status Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process. NIST Interagency/Internal Report (NISTIR) – 8240.
ANSI. (1998). X9.63: Public Key Cryptography for the Financial Services Industry: Key Agreement and Key Transport Using Elliptic Curve Cryptography. American National Standards Institute.
Barker, E. B. (2013). Digital Signature Standard (DSS). Federal Inf. Process. Stds. (NIST FIPS) – 186-4.
Bromme, A. (2003, July). A classification of biometric signatures. In 2003 International Conference on Multimedia and Expo. ICME’03. Proceedings (Cat. No. 03TH8698) (Vol. 3, pp. III-17). IEEE.
Mason, S. (2016). Electronic signatures in law. University of London Press.
Rivest, R. L., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), 120–126.
Buterin's white paper [Buterin 2014] described smart contracts as "systems which automatically move digital assets according to arbitrary pre-specified rules". [ISO 2019] defined an "asset" as anything that has value to a stakeholder, and a "digital asset" as one that exists only in digital form or that is the digital representation of another asset. Similarly, a "token" is defined as a digital asset representing a collection of entitlements. The tokenization of assets refers to the process of issuing a blockchain token (specifically, a security token) that digitally represents a real tradable asset, in many ways similar to the traditional process of securitization [Deloitte 2018]. A security token could thus represent an ownership share in a company, real estate, or another financial asset. These security tokens can then be traded on a secondary market. A security token is also capable of having the token-holder's rights embedded directly in the token and immutably recorded on the blockchain. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets, though smart contracts may also have other purposes.
Recall that smart contracts started as an enhancement providing a programmable virtual machine in the context of blockchains, and that the initial applications of blockchains were cryptocurrencies. Cryptocurrencies have been recognized as commodities for the purpose of regulations on derivatives like options and futures. High-value smart contracts on cryptocurrency derivatives require substantial legal protection and often utilize standardized legal documentation provided by the International Swaps and Derivatives Association (ISDA). Smart contracts managing cryptocurrency derivatives aim to automate many aspects of the provisions of the ISDA legal documentation [Clack 2019]. There have been a number of efforts to extend blockchains and smart contracts beyond cryptocurrency applications to manage other types of assets. Initially these were custom dApps, but as interest in smart contracts for specific types of assets grew, a corresponding interest developed in having common token representations for particular types of asset, enabling broader interoperability and reducing custom legal and development risks and costs. Rather than having specialized blockchains for supply chain provenance and others for smart derivatives contracts, different tokens representing those asset classes can be managed by a smart contract independently of the underlying blockchain technology.
Not all tokens are intrinsically valuable; many derive their value by reference to some underlying asset. [Bartoletti 2017] classifies smart contracts by application domain as financial, notary (leveraging the immutability of the blockchain to memorialize some data), games (of skill or chance), wallet (managing accounts, sometimes with multiple owners), library (for general-purpose operations, e.g., math and string transformations) and unclassified; the financial and notary categories had the most contracts. The notary smart contracts enable smart contracts to manage non-cryptocurrency assets. [Alharby 2018] classified the smart contract literature using a keyword technique into security, privacy, software engineering, application (e.g., IoT), performance & scalability and other smart contract related topics. The application domains were identified as Internet of Things (IoT), cloud computing, financial, data (e.g., data sharing, data management, data indexing, data integrity checks, data provenance), healthcare, access control & authentication, and other applications. [Rouhani 2019] categorized decentralized applications into seven main groups: healthcare, IoT, identity management, record keeping, supply chain, BPM, and voting; blockchain-based applications, however, are not limited to these groups. Keywords and application identification provide a view of the breadth of applications, but these are not exclusive or finite categories; new applications or keywords can always be developed, extending those lists. Token-based music platforms have been proposed [Mouloud 2019]. Networked digital sharing economy services enable the effective and efficient sharing of vehicles, housing, and everyday objects, and can utilize a blockchain ledger and smart contracting technologies to improve peer trust and limit the number of required intermediaries, respectively [Fedosov 2018]. The tokenization of sustainable infrastructure can address some of the fundamental challenges in the financing of the asset class, such as lack of liquidity, high transaction costs and limited transparency [Uzoski 2019]. Tokens can also be useful from a privacy perspective. Tokenization, the process of converting a piece of data into a random string of characters known as a token, can be used to protect sensitive data by substituting non-sensitive data. The token serves merely as a reference to the original data but does not determine those original data values. The advantage of tokens, from a privacy perspective, is that there is no mathematical relationship to the real data that they represent; the real data values cannot be obtained through reversal [Morrow 2019] (see the sketch below). If the costs of tokenizing and marketing new asset classes are lower than the costs of traditional securities offerings, this can enable securitization of new asset classes. The ability to subdivide some types of tokens may enable wider markets through reduced minimum bid sizes.
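A minimal sketch of privacy tokenization as just described: the token is generated randomly, so it cannot be reversed to the original value without access to the vault. The in-memory dict standing in for the vault is purely illustrative.

```python
# Substitute sensitive values with random tokens held in a separate
# vault; the token itself has no mathematical link to the original.
import secrets

vault: dict[str, str] = {}

def tokenize(sensitive_value: str) -> str:
    token = secrets.token_hex(16)  # random; not derived from the value
    vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    return vault[token]  # only the vault can map the token back

card = tokenize("4111-1111-1111-1111")
print(card)              # e.g. 'f3a9...': safe to store or share
print(detokenize(card))  # original recoverable only via the vault
```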
In 2004, early proposals were made for XML data type definitions to capture electronic contracts [Krishna 2004]. In 2015, the Ethereum developer community adopted ERC-20, which specifies a common interface for fungible tokens that are divisible and not distinguishable, to ensure interoperability [Vogelsteller 2015]. While the initial token applications may have been for cryptocurrencies, blockchains (especially Ethereum) are being applied in many other domains, and so the assets administered by smart contracts are being stretched beyond their original purpose to enable new applications. Studies of trading patterns would need to distinguish whether those tokens were all being used to represent the same kind of asset in order to make valid inferences about a particular market for that asset. Stretching beyond fungible cryptocurrencies to enable popular new blockchain applications like tracking supply chain provenance requires a different kind of token: a non-fungible token. Non-fungible tokens (NFTs) are a new type of unique and indivisible blockchain-based token introduced in late 2017 [Regner 2019]. In 2018 the Ethereum community adopted ERC-721, which extends the common interface for tokens with additional functions to ensure that tokens based on it are distinctly non-fungible and thus unique [Entriken 2018].
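ERC-20 itself is specified as a Solidity interface; the sketch below mimics two of its core functions (balanceOf, transfer) in Python purely to illustrate what a common fungible-token interface means: balances are interchangeable quantities rather than distinguishable items. The names and supply figures are illustrative.

```python
class FungibleToken:
    """Python mimic of the core ERC-20 interface (illustrative only;
    real ERC-20 tokens are Solidity contracts on Ethereum)."""

    def __init__(self, total_supply: int, issuer: str):
        self.balances = {issuer: total_supply}

    def balance_of(self, owner: str) -> int:  # cf. ERC-20 balanceOf
        return self.balances.get(owner, 0)

    def transfer(self, sender: str, to: str, value: int) -> bool:  # cf. transfer
        if self.balance_of(sender) < value:
            return False
        self.balances[sender] -= value
        self.balances[to] = self.balance_of(to) + value
        return True

token = FungibleToken(total_supply=1_000_000, issuer="alice")
token.transfer("alice", "bob", 250)
assert token.balance_of("bob") == 250  # units are fungible: any 250 will do
```

An ERC-721 token, by contrast, would track ownership per unique token ID rather than per interchangeable balance.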
In 2018, [FINMA 2018] identified three classes of tokens: payment tokens, asset tokens and utility tokens. A utility token is intended to provide digital access to an application or service by means of a blockchain-based infrastructure, which may include the ability to exchange the token for the service. A token model along these lines can be summarized in three dimensions:
Token functions: payment, utility, asset, yield.
Token features: stake rewards; sole medium of exchange.
Token distribution: initial drops and reservations for miners and service providers.
In 2019, [Hong 2019] proposed a non-fungible token structure for use in Hyperledger, and [Cai 2019] proposed a universal token structure for token-based blockchain technology. Also in 2019, an industry group, the Token Taxonomy Initiative, proposed a Token Taxonomy Framework [TTI 2019] in an effort to model existing business models and define new ones. TTI defines a token as a representation of some shared value that is either intrinsically digital or a digital receipt or title for some material item or property, and distinguishes that from a wallet, which is a repository of tokens attributed to an owner in one or more accounts. TTI classifies tokens based on five characteristics: token type (fungible or not), token unit (fractional, whole, singleton), value type (intrinsic or reference), representation type (common or unique), and template type (single or hybrid). The base token types are augmented with behaviors and properties captured in a syntax (the token formula). Particular token formulae would be suitable for different business model applications, e.g., loyalty tokens, supply chain SKU tokens, securitized real property tokens, etc. This Token Taxonomy Framework subsumes the functions, features and distribution aspects of the FINMA token model, enabling those regulatory perspectives as well as other properties of particular value in enabling different token-based business models.
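The five TTF characteristics lend themselves to a simple data model. The sketch below encodes them as Python enums; the example classification at the end is an illustrative assumption, not a TTF-published formula.

```python
from dataclasses import dataclass
from enum import Enum

class TokenType(Enum):
    FUNGIBLE = "fungible"
    NON_FUNGIBLE = "non-fungible"

class TokenUnit(Enum):
    FRACTIONAL = "fractional"
    WHOLE = "whole"
    SINGLETON = "singleton"

class ValueType(Enum):
    INTRINSIC = "intrinsic"
    REFERENCE = "reference"

class RepresentationType(Enum):
    COMMON = "common"
    UNIQUE = "unique"

class TemplateType(Enum):
    SINGLE = "single"
    HYBRID = "hybrid"

@dataclass
class TokenClassification:
    """The five TTF classification characteristics described above."""
    token_type: TokenType
    token_unit: TokenUnit
    value_type: ValueType
    representation_type: RepresentationType
    template_type: TemplateType

# e.g., a securitized real-property token: a unique, non-fungible
# reference to a real asset, fractionally divisible among many owners.
real_property = TokenClassification(
    TokenType.NON_FUNGIBLE, TokenUnit.FRACTIONAL, ValueType.REFERENCE,
    RepresentationType.UNIQUE, TemplateType.SINGLE)
```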
The immutable, public Ethereum blockchain enables study of trading patterns in ERC-20 tokens, revealing trading networks that display strong power-law properties (coinciding with current network theory expectations) [Soumin 2018]. Even though the entire network of token transfers has been claimed to follow a power law in its degree distribution, many individual token networks do not: they are frequently dominated by a single hub-and-spoke pattern. When considering initial token recipients and path distances to exchanges, a large part of the activity is directed towards these central instances, but many owners never transfer their tokens at all [Victor 2019]. There is strong evidence of a positive relationship between the price of Ether and the prices of blockchain tokens. Token function does impact token price over a time period that spans both boom and bust. The designed connection can be effective, linking a project that has a value with a token that has a price, specifically in the absence of a legal connection or claim [Lo 2019]. From these preliminary studies, tokens seem to exhibit some of the trading properties of regular securities. Many of these initial tokens have no or untested legal connections to the underlying assets. While consistent behavior in boom and bust is important for an investment, from a legal perspective the predictability of outcomes for asset recovery during more stressful events (e.g., bankruptcy) may be more important.
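The hub-and-spoke observation can be checked directly on transfer data. A sketch assuming the networkx package and a toy list of (sender, receiver) transfers; real studies would pull transfer events from the chain.

```python
import networkx as nx

# Toy transfer data: an issuer distributing tokens, holders moving
# them to an exchange. Real data would come from ERC-20 Transfer logs.
transfers = [("issuer", "a"), ("issuer", "b"), ("issuer", "c"),
             ("a", "exchange"), ("b", "exchange")]

G = nx.DiGraph()
G.add_edges_from(transfers)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print(degrees)  # a single dominant hub holds most of the edges

# Share of all transfer edges touching the busiest node: a crude
# hub-and-spoke indicator (closer to 1.0 = more hub-dominated).
print(max(degrees) / G.number_of_edges())
```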
A point of concern is understanding how tokens representing value will remain linked to the real asset that they represent. For example, imagine you own tokens representing a small fraction of a set of gold coins at a bank, and some coins are stolen. Or the reverse: who owns the gold coins if the account keys are lost or the token destroyed? Being able to rationally predict what happens to your token and to the other token owners is crucially important, since the value of tokens becomes greatly undermined if they cannot be proven to be linked to real-world assets [Deloitte 2018]. In these types of cases, off-chain enforcement action is required. A typical legal tool for representing such interests in real assets would be recording security interests and liens in real property under the UCC. One approach would be to update the lien recordings for the new owner after each transaction. There are at least two difficulties with this approach. First, the smart contracts of today may not be able to interact with manual off-chain legal recordation processes for security interests. Second, if the purpose of tokenizing the asset was to increase liquidity, frequent transactions may result in a high volume of updates overloading off-chain manual recordation processes. Another approach would be to use a centralized custody agent (similar to physical custody) and have them hold the lien in the public records as a trustee (perhaps keeping account records reflecting updates from transactions on a blockchain). If the smart contract were a legal entity (e.g., a Vermont-style BBLLC), then the BBLLC could hold the security interest in the public records directly. However, the smart contract would need to be able to respond to legal actions on the lien, and may incur other obligations when acting as the custodian (e.g., reporting, insurance, licenses). The asset custodian as a traditional entity vs the BBLLC dApp thus presents alternatives for consideration. Traditional asset custodians provide an identifiable party from whom reparations can be sought in the event of asset loss or degradation, and are commonly held to a fiduciary standard of care. A BBLLC approach emphasizes a digital distributed trust model; BBLLCs, however, may be challenged by off-chain enforcement actions and physical custody operations (e.g., physical asset audits), and BBLLC custodians may require insurance to protect asset owners in the event of asset losses or degradation.
If ownership of an asset, such as a building, is split among thousands of people, there is little incentive for owners to bear the costs associated with that asset, such as maintenance and ensuring rent is collected [Deloitte 2018]. One can certainly imagine a smart contract detecting that rent has not been credited to an account (see the sketch below), but what can then be done in terms of off-chain enforcement? While IoT blockchains can enable significant cyberphysical functionality, typical landlord remedies such as self-help and legal dispossessory actions seem technically difficult or socially problematic for a smart contract. Some classes of contracts requiring off-chain enforcement actions may not be a good fit for complete implementation by dApp smart contracts at this stage, and may still require human physical agents or other legal entities for some actions.
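The detection half of that scenario is trivially programmable; the sketch below (the rent amount and escalation hook are hypothetical) shows how a contract could flag a default, while the dispossessory action itself remains stubbornly off-chain:

    from datetime import date

    RENT_DUE_UNITS = 1000  # hypothetical monthly rent, in token units

    def rent_check(credits: list[tuple[date, int]], period_start: date) -> bool:
        """Return True if rent for the period has been credited on-chain."""
        received = sum(amount for when, amount in credits if when >= period_start)
        return received >= RENT_DUE_UNITS

    # A contract can flag the default...
    if not rent_check([(date(2020, 1, 3), 400)], date(2020, 1, 1)):
        # ...but dispossessory action still needs an off-chain human or entity.
        print("rent unpaid: escalate to off-chain enforcement agent")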
Because the transaction of tokens is completed with smart contracts, certain parts of the exchange process are automated. For some classes of transactions, this automation can reduce the administrative burden involved in buying and selling, with fewer intermediaries needed, leading not only to faster deal execution but also to lower transaction fees [Deloitte 2018]. Eliminating intermediaries sounds financially efficient; eliminating all intermediaries, however, may not be wise for some classes of assets. An intermediate entity may be useful as a liability shield for remote owners. Consider a tokenized mobile asset (e.g., a drone or terrestrial vehicle) owned and operated via a smart contract, which injures another person or their property; most remote owners would insist on some limited-liability entity or insurance. While smart-contract-operated vehicles may not be computationally feasible in the short term, even immobile asset classes like real estate can create liabilities for the owner (e.g., a premises slip and fall). The point is that for some physical asset classes, the existence of at least one intermediate entity for the purpose of liability shielding may be desirable. The actions of smart contracts on public blockchains may also raise privacy concerns.
By tokenizing financial assets, especially private securities or typically illiquid assets, these tokens can then be traded on a secondary market of the issuer's choice, enabling greater portfolio diversification and capital investment in otherwise illiquid assets. Tokenization could open up investment in assets to a much wider audience through reduced minimum investment amounts and periods. Tokens can be highly divisible, meaning investors can purchase tokens that represent very small percentages of the underlying assets; if each order is cheaper and easier to process, this opens the way for a significant reduction of minimum investment amounts [Deloitte 2018]. Token markets to date have often operated via exempt ICOs restricted to accredited investors, minimizing regulatory filings and disclosures. Investment minimums are unlikely to be a major driver for accredited investors, though enabling investment in diverse but otherwise illiquid asset classes may be of interest for portfolio diversification. Enabling liquidity for mass-market investors would require security token investments to meet the higher regulatory standards for filings and disclosures needed to bring those investments to the larger public markets. Smart contracts offer efficient process automation for trading and other transactions based on tokenized assets. While this can provide market efficiencies, not all asset classes are ready for tokenization without further consideration. Smart contracts may also need to take on additional behaviors to reflect the increased importance of their role in administering assets. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets.
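The divisibility argument reduces to simple arithmetic, shown here with hypothetical figures for a tokenized building:

    # Hypothetical tokenized building: high divisibility shrinks the minimum ticket.
    asset_value_usd = 10_000_000
    token_supply = 1_000_000                           # fungible fractional interests
    price_per_token = asset_value_usd / token_supply   # $10 per token

    min_tokens_per_order = 1
    min_investment = min_tokens_per_order * price_per_token
    ownership_fraction = min_tokens_per_order / token_supply

    print(f"minimum investment: ${min_investment:.2f}")        # $10.00
    print(f"ownership per token: {ownership_fraction:.6%}")    # 0.000100%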
References
[Alharby 2018] M. Alharby et al., "Blockchain-based smart contracts: A systematic mapping study of academic research (2018)." Proc. Int'l Conf. on Cloud Computing, Big Data and Blockchain, 2018.
[Bartoletti 2017] M. Bartoletti & L. Pompianu, "An empirical analysis of smart contracts: platforms, applications, and design patterns." International Conference on Financial Cryptography and Data Security. Springer, Cham, 2017.
[Cai 2019] T. Cai et al., "Analysis of Blockchain System With Token-Based Bookkeeping Method." IEEE Access 7 (2019): 50823-50832.
[Clack 2019] C. Clack & C. McGonagle, "Smart Derivatives Contracts: the ISDA Master Agreement and the automation of payments and deliveries." arXiv preprint arXiv:1904.01461 (2019).
[Deloitte 2018] Deloitte, "The tokenization of assets is disrupting the financial industry. Are you ready?" (2018).
[Fedosov 2018] A. Fedosov et al., "Sharing physical objects using smart contracts." Proc. of the 20th Int'l Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, 2018.
[Hong 2019] S. Hong et al., "Design of Extensible Non-Fungible Token Model in Hyperledger Fabric." Proc. of the 3rd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. ACM, 2019.
[Krishna 2004] P. Krishna et al., "An EREC framework for e-contract modeling, enactment and monitoring." Data & Knowledge Engineering 51.1 (2004): 31-58.
[Lo 2019] Y. Lo et al., "Assets on the Blockchain: An Empirical Study of Tokenomics." Available at SSRN 3309686 (2019).
[Migliorini 2019] S. Migliorini et al., "The Rise of Enforceable Business Processes from the Hashes of Blockchain-Based Smart Contracts." Enterprise, Business-Process and Information Systems Modeling. Springer, Cham, 2019. 130-138.
[Morrow 2019] M. Morrow & M. Zarrebini, "Blockchain and the Tokenization of the Individual: Societal Implications." Future Internet 11.10 (2019): 220.
[Mouloud 2019] K. Mouloud, "Blockchain in the Music Industry: A Study of Token Based Music Platforms." Diss. Aalborg University, 2019.
[Somin 2018] S. Somin et al., "Network analysis of ERC20 tokens trading on Ethereum blockchain." International Conference on Complex Systems. Springer, Cham, 2018.
[Victor 2019] F. Victor & B. Lüders, "Measuring Ethereum-based ERC20 token networks." International Conference on Financial Cryptography and Data Security. Springer, Cham, 2019.
There is some overlap between the concepts of autonomy in software agents and robots, and another overlap between the autonomy of software agents and DAOs. The concept of autonomous software agents has been around for more than 20 years. In 1996, [Franklin 1996] proposed a taxonomy of autonomous agents (biological, robotic, or computational) and defined software agents as a type of computational agent (distinguished from artificial life), further categorized into task-specific agents, entertainment agents, or viruses. Note that this defines agents as being autonomous but provides no notion of the level of autonomy. In 1999, [Heckman 1999] reviewed potential liability issues arising from autonomous agent designs. Around the same time, [Barber 1999] quantitatively defined the degree of autonomy as an agent's relative voting weight in decision-making. Reviewing autonomous agents in 2000, [Dowling 2000] considered the delegation of any task to a software agent as raising questions about its autonomy of action and decision, the degree of trust that can be vested in the outcomes it achieves, and the location of responsibility, both moral and legal, for those outcomes, but did not couple responsibility for decisions with levels of autonomy.
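[Barber 1999]'s quantitative measure lends itself to a one-line computation. The sketch below is an interpretation of that definition, with names of my own choosing rather than code from the cited work:

    def degree_of_autonomy(agent_weight: float, all_weights: list[float]) -> float:
        """Barber & Martin-style measure: relative voting weight in a decision."""
        return agent_weight / sum(all_weights)

    # An agent holding 3 of 4 total votes is 0.75 autonomous for this decision;
    # 1.0 would be fully autonomous, 0.0 fully commanded by others.
    print(degree_of_autonomy(3.0, [3.0, 0.5, 0.5]))  # 0.75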
In 2003, [Braynov 2003] considered autonomy a relative concept depending on what a user (or another agent) expects from an agent, defining autonomy as a relation with four constituents: (1) the subject of autonomy: the entity (a single agent or a group of agents) that acts or makes decisions; (2) the object of autonomy: a goal or task that the subject wants to perform or achieve, or a decision that the subject wants to make; (3) the affector of autonomy: the entity that has an impact on the subject's decisions and actions, thereby affecting the final outcome of the subject's behavior; the affector could be the physical environment, another agent (including the user), or a group of agents, and could either increase or decrease the autonomy of the subject; (4) the performance measure: a measure of how successful the subject is with respect to the object of autonomy. Around that time, [Brazier 2003] was considering whether agents could close contracts, and if so how liabilities might be allocated. While autonomy was seen as a complex relationship involving decision making, and the allocation of liabilities was seen as important, liability was not considered a dimension of autonomy. And while [Braynov 2003]'s object of autonomy included decision making, it also included broader topics like goals.
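Read as a data structure, [Braynov 2003]'s relation is straightforward to model. The following sketch is an illustration of the four constituents, not code from that work, and the instance shown is hypothetical:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AutonomyRelation:
        subject: str                      # who acts or decides (agent or group)
        object: str                       # the goal, task, or decision at issue
        affector: str                     # environment, user, or other agent(s)
        performance: Callable[[], float]  # success w.r.t. the object of autonomy

    # Hypothetical instance: a bidding agent whose autonomy is constrained by its user.
    relation = AutonomyRelation(
        subject="bidding_agent_01",
        object="decide the maximum bid",
        affector="user-imposed spending cap",
        performance=lambda: 0.9,  # e.g., fraction of auctions won within budget
    )
    print(relation.performance())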
In 2017, [Pagallo 2017] provided a historical perspective on the liability issues arising from automation through to autonomous systems, recognizing that historical approaches may be insufficient for the current challenges of technology. Considering levels of autonomy in agents from an ethical perspective, [Dyrkolbotn 2017] identified five "levels" of autonomy to pinpoint where a morally salient decision belongs on the following scale: (i) dependence, or level 1 autonomy: the behavior of the system was predicted by someone with a capacity to intervene; (ii) proxy, or level 2 autonomy: the behavior of the system should have been predicted by someone with a capacity to intervene; (iii) representation, or level 3 autonomy: the behavior of the system could have been predicted by someone with a capacity to intervene; (iv) legal personality, or level 4 autonomy: the behavior of the system cannot be explained only in terms of the system's design and environment; these are systems whose behavior could not have been predicted by anyone with a capacity to intervene; (v) legal immunity, or level -1: the behavior of the system counts as evidence of a defect, namely, behavior that could not have been predicted by the system itself, or where the machine did not have a capacity to intervene. Also around this time, [Millard 2017] was concerned with the need for clarity concerning liabilities in the IoT space. In this period the need for change in existing liability schemes was starting to be recognized, and the level definitions of [Dyrkolbotn 2017]'s decision-based autonomy scale included notions of the legal consequences of decision making.
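As an illustrative encoding (the mapping to code is mine, not from the cited work), [Dyrkolbotn 2017]'s scale can be captured as an ordered enumeration keyed by the predictability of the system's behavior:

    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        """An illustrative encoding of [Dyrkolbotn 2017]'s autonomy scale."""
        LEGAL_IMMUNITY = -1    # behavior counts as evidence of a defect
        DEPENDENCE = 1         # behavior was predicted by someone able to intervene
        PROXY = 2              # behavior should have been predicted
        REPRESENTATION = 3     # behavior could have been predicted
        LEGAL_PERSONALITY = 4  # behavior unpredictable by anyone able to intervene

    # Ordering follows the scale: higher levels mean weaker grounds for pinning
    # a morally salient decision on a human with a capacity to intervene.
    assert AutonomyLevel.PROXY < AutonomyLevel.LEGAL_PERSONALITY
    print(list(AutonomyLevel))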
In 2019, [Janiesch 2019] surveyed the literature on autonomy in the context of IoT agents, identifying 20 category definitions for levels of autonomy, based on whether the human or the machine performs the decision making, and nine different dimensions or types of agent autonomy: interpretation, know-how, plan, goal, reasoning, monitoring, skill, resource, and condition. They also identified 12 design requirements for autonomous agents and proposed an autonomy model and language. [Lebeuf 2019] proposed a definition for software (ro)bots as an interface paradigm, including command line, graphical, touch, written/spoken language, or some combination of interfaces, and also proposed a faceted taxonomy for software (ro)bots based on facets of (ro)bot environment, intrinsic characteristics, and interaction dimensions. The boundaries of these different types of software entities have thus become blurred again, with less consensus on the meaning of level of autonomy in the context of software entities.
Back in 2003, [Braynov 2003] had noted that it was obvious that between the lack of autonomy and complete autonomy there lies a wide range of intermediate states describing an agent's ability to act and decide independently. A common thread around decision making as a basis for autonomy levels emerges from [Barber 1999], [Dowling 2000], [Braynov 2003], and [Dyrkolbotn 2017], but not a consensus on a particular metric. The recent recognition of the potential for changes in legal liability regimes to better reflect software agents brings to mind [Myhre 2019]'s assertion of accountability as a measure of autonomy. Whether through action or inaction, software agents may cause injuries to people or their property; for purely software agents with no cyberphysical aspects, those injuries would have to be informational in nature (e.g., privacy violations, slander, etc.). While liability or accountability for autonomous decision making may not be the only useful dimension for autonomy in software agents, it does have practical commercial value in quantifying risks, thus enabling more commercial activities based on software agents to proceed.
References
[Barber 1999] S. Barber & C. Martin, "Agent Autonomy: Specification, Measurement, and Dynamic Adjustment." Proceedings of the Autonomy Control Software Workshop, Agents '99, pp. 8-15, May 1-5, 1999, Seattle, WA.
[Braynov 2003] S. Braynov & H. Hexmoor, "Quantifying relative autonomy in multiagent interaction." Agent Autonomy. Springer, Boston, MA, 2003. 55-73.
[Dowling 2000] C. Dowling, "Intelligent agents: some ethical issues and dilemmas." Selected Papers from the Second Australian Institute Conference on Computer Ethics. Australian Computer Society, Inc., 2000.
[Dyrkolbotn 2017] S. Dyrkolbotn et al., "Classifying the Autonomy and Morality of Artificial Agents." CARe-MAS@PRIMA, 2017.
[Franklin 1996] S. Franklin & A. Graesser, "Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents." International Workshop on Agent Theories, Architectures, and Languages. Springer, Berlin, Heidelberg, 1996.
[Heckman 1999] C. Heckman & J. Wobbrock, "Liability for autonomous agent design." Autonomous Agents and Multi-Agent Systems 2.1 (1999): 87-103.
[Janiesch 2019] C. Janiesch et al., "Specifying autonomy in the Internet of Things: the autonomy model and notation." Information Systems and e-Business Management 17.1 (2019): 159-194.
[Lebeuf 2019] C. Lebeuf et al., "Defining and classifying software bots: a faceted taxonomy." Proceedings of the 1st International Workshop on Bots in Software Engineering. IEEE Press, 2019.
[Millard 2017] C. Millard et al., "Internet of Things Ecosystems: Unpacking Legal Relationships and Liabilities." 2017 IEEE International Conference on Cloud Engineering (IC2E). IEEE, 2017.