KEI Comments in response to the Office of Science and Technology Policy (OSTP) Request for Information (RFI), regarding National Priorities for Artificial Intelligence, as noticed on May 23, 2023 in the Federal Register, 88 FR 34194.
July 7, 2023
Knowledge Ecology International
Advocacy Group
James Love, james.love@keionline.org
Knowledge Ecology International (KEI) offers the following comments in response to this Request for Information (RFI) on National Priorities for Artificial Intelligence. The RFI includes a large number of questions, many of them overlapping. Our comments are grouped under the major headings of the RFI, rather than addressing each of the 29 questions individually.
Protecting rights, safety, and national security:
- Artificial intelligence (AI) is already used, and can be used further, to monitor and address fraud and other abusive practices. Examples include email spam filters, credit card fraud monitoring, plagiarism checkers, detection of social media bots, credit ratings, automated takedowns of infringing copyrighted materials, income tax audits, and countless other areas. In many of these areas, the initial AI decisions have flaws that harm the public, and one challenge is to provide affected persons with meaningful opportunities to challenge and correct harmful errors. When operating at scale, the costs of reviews can be substantial, particularly when trained humans are expected to evaluate disputes. AI itself can play a role in dispute resolution in some instances, but an AI service playing that role would benefit from standards for transparency and an acceptable governance structure, one that gives different stakeholders the opportunity to audit and influence its operations and includes periodic audits and reviews of its performance.
- Avoiding excessive trade secret or confidential business information norms in national legislation and in trade agreements will be important to preserve the possibility of auditing and evaluating concerns about biases, mismatched objectives, and other issues in AI services.
- The Administration should begin negotiations to eliminate, or at least modify, Article 19.16 (Source Code) of the United States-Mexico-Canada Agreement (USMCA), which currently reads:
Article 19.16: Source Code
1. No Party shall require the transfer of, or access to, a source code of software owned by a person of another Party, or to an algorithm expressed in that source code, as a condition for the import, distribution, sale or use of that software, or of products containing that software, in its territory.
2. This Article does not preclude a regulatory body or judicial authority of a Party from requiring a person of another Party to preserve and make available the source code of software, or an algorithm expressed in that source code, to the regulatory body for a specific investigation, inspection, examination, enforcement action, or judicial proceeding, subject to safeguards against unauthorized disclosure.
USMCA Article 19.16 on source code is too restrictive of governments that may want to require greater transparency of source code and algorithms, not only for a narrow regulatory, enforcement, or judicial proceeding subject to safeguards against unauthorized disclosure, but in some cases for the general public without limitations on access, or for non-government bodies performing audits. KEI raised these concerns in several negotiations, including but not limited to the USMCA, but they were ignored, and lobbying by a handful of technology firms produced these undemocratic restrictions on what legislatures can do, both in the USMCA and in agreements the United States might join later, such as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), Article 14.17 (Source Code).
- In some areas where there is an interest in attribution, remuneration, or compensation for the use of artistic works as AI training data, better metadata, better and more global standards for metadata, and incentives to improve and curate that metadata can be useful (see the sketch following this list).
- AI is being used to develop new weapons, raising the threat that such weapons will be used by humans against humans, if not by machines against humans. The Second Amendment of the US Constitution has been used by some courts to grant citizens expansive rights to own and use weapons that have no role in hunting or even self-defense.
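To make the metadata suggestion above concrete, the following is a minimal sketch, in Python, of what a metadata record for a work used as AI training data might contain. The schema, field names, and values are hypothetical illustrations, not an existing standard.

from dataclasses import dataclass

@dataclass
class TrainingWorkMetadata:
    # Hypothetical fields for attribution and remuneration; this is not
    # an existing metadata standard.
    work_id: str            # persistent identifier for the work
    title: str
    creators: list[str]     # parties entitled to attribution
    rights_holder: str      # party entitled to any remuneration
    license: str            # terms under which training use is permitted
    consent_status: str     # e.g. "granted", "opted_out", "unknown"

record = TrainingWorkMetadata(
    work_id="example:placeholder-0001",
    title="Example Painting",
    creators=["Example Artist"],
    rights_holder="Example Artist",
    license="CC-BY-4.0",
    consent_status="granted",
)
print(record)

Curating and globally standardizing fields of this kind, across jurisdictions and media types, is the practical work that the incentives mentioned above would need to support.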
Advancing equity and strengthening civil rights:
- Self-driving vehicles will, at some point, be a liberating technology for persons who are unable or not permitted to drive themselves.
- Some elderly people struggle with the increasing complexity of online government, commercial, and financial transactions, and with attempts to defraud and otherwise harm them. AI will make attempts to defraud and harm people more dangerous, but AI can also be used to help people navigate complex online transactions and deal with fraud.
- AI can lower the costs of developing new medical technologies, but unless policy makers rethink the structure of incentives, unequal and inequitable access can be an unwanted outcome. If policy makers implement incentive systems that delink research and development (R&D) costs from product prices or the grant of monopolies, there can be greater equity in access. To delink incentives from product prices, it is helpful to have greater transparency of R&D costs at each stage, as well as accurate metrics of the outcomes of medical interventions.
- AI legal services can potentially expand access to justice and legal services. Governments may want to create certifications of qualified legal services, but it’s not obvious this is a good idea at this point.
- To the extent that discrimination is based on ignorance and bias without an empirical foundation, more neutral, data-driven decisions may have a positive impact on communities facing discrimination.
Bolstering democracy and civic participation:
- AI can be used to make government operations and regulatory processes more transparent and accessible. AI could be used to answer queries about government operations and provide access to data in ways that go beyond what a human-staffed service can realistically provide today. As with all of these AI services, building in transparency and making audits and evaluations possible and trusted will be important.
Promoting economic growth and good jobs:
- The impact of AI on employment and incomes will be a very significant concern to some, and beginning to think about what this might look like should be a priority. It is difficult to anticipate, let alone be certain about, the areas where AI services can replace existing jobs, but there are some areas where AI services are likely, in the future, to replace highly trained professionals, if not wholly, then certainly for some activities, including many services now provided by medical doctors, lawyers, accountants, editors, news reporters, software developers, artists, musicians, performers, audiovisual script writers, financial analysts and traders, educators, architects, biomedical scientists, airline pilots, and others. In some cases, an AI service will be a tool that a human uses to do their work with improved productivity, with unknown consequences for conformity, changes in form, and higher- or lower-quality outputs.
- In theory, it is possible for society to compensate and retrain workers who experience adverse shocks from changes in technology or business models. In practice, this is very hard to do, and often there is no actual effort to do so. If AI creates waves of changes in the demands for certain types of skilled labor, it will be disruptive, and create deep resentments about the overall fairness of society.
- Generative AI services may lead to new and unwanted concentrations of power and wealth. To the extent that such concentrations are related to control over data, including but not limited to training data, policy makers may want to rethink how data is stored, accessed, and controlled. In the sciences, dataspaces are a way of managing data in a decentralized manner, with a federated approach that allows system-wide queries of data while providing safeguards on issues such as privacy and avoiding the creation of monopolies. The management and governance structure of these implementations is important, and involves political as well as technical and economic considerations. When data is highly concentrated, governments need to take a more expansive approach to the essential facilities doctrine and find ways to broaden access, if not to entire datasets, then to queries that can make services more competitive and innovative.
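As a rough sketch of the federated approach described above, the following Python illustrates how a system-wide query might be fanned out to independently governed data nodes, with each node applying its own safeguards (here, returning only aggregate counts) before anything leaves the node. The interfaces and names are hypothetical, not an existing dataspace implementation.

from abc import ABC, abstractmethod

class DataNode(ABC):
    """One independently governed data holder in a hypothetical federation."""

    @abstractmethod
    def run_query(self, term: str) -> dict:
        """Run a query locally, applying this node's own safeguards."""

class AggregatingNode(DataNode):
    """Illustrative safeguard: answer queries with aggregate counts only,
    so row-level records never leave the node."""

    def __init__(self, name: str, records: list):
        self.name = name
        self.records = records

    def run_query(self, term: str) -> dict:
        count = sum(1 for r in self.records if term in r.get("keywords", []))
        return {"node": self.name, "count": count}

def federated_query(nodes, term: str) -> list:
    """Fan the query out across the federation; no central copy of the data."""
    return [node.run_query(term) for node in nodes]

nodes = [
    AggregatingNode("node-a", [{"keywords": ["genomics"]}, {"keywords": ["imaging"]}]),
    AggregatingNode("node-b", [{"keywords": ["genomics", "imaging"]}]),
]
print(federated_query(nodes, "genomics"))
# prints: [{'node': 'node-a', 'count': 1}, {'node': 'node-b', 'count': 1}]

The design point worth noting is that governance lives at each node: what a query may return is decided by the data holder, not by a central operator, which is what allows privacy safeguards and decentralized control to coexist with system-wide queries.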
Other comments that you would like to provide to inform the National AI Strategy that are not covered by the questions above?
- Efforts to require consent, attribution, or compensation for training data have different rationales and different consequences depending upon the subject matter. It is one thing to require consent or allow opt-outs for screenplays, works of visual art, or music performances, and another to do this in the context of biomedical science. The default for using data to train AI should be fair use and/or freedom to operate, with limited exceptions, not the other way around.
- The valuations of some technology companies are astounding, including companies that pay little or no income taxes. Policy makers should consider taxes on asset valuations for firms with very high valuations unrelated to physical assets. Privately held companies can self-assess their value by announcing a price at which anyone can buy the company. The tax can be based upon the percentage of the company's valuation that corresponds to its operations in the United States compared to the rest of the world, applied regardless of where a company claims residence (a simple numerical sketch appears at the end of these comments).
- The impact of AI on the distribution of income remains to be seen, but there are reasons to be concerned. If over time AI services eliminate or reduce employment in highly skilled jobs, and the inputs to products and services are primarily provided by machines and data, the wealth generated by those services or products will largely accrue to the owners of the machines and data. One consequence of labor making smaller claims on company incomes is greater inequality, because the ownership of companies is distributed more unequally than income from wages. One can imagine a generation of very high-income families and sovereign wealth funds, including from non-US sources, accumulating the means to own AI services at such a rate that inequality becomes worse and, at some point, politically unacceptable. For this reason, among others, policy makers should explore the role of alternative ownership models, including public benefit corporations, mutual ownership, worker- or consumer-owned cooperatives, and non-profit firms, and the policies that would allow such entities to succeed and prosper.
- It’s a big world, and there is a race to develop AI services. It will take a while to determine how regulatory obligations and other norms can be managed across borders.
- While there is pressure on policy makers to make policy for AI, some prudence is warranted. The many consultations now under way are useful, but governments should not be in a rush-to-regulate mode; where there are pressing needs in some areas, early interventions should probably be narrow, providing the flexibility to revise thinking over time as we learn and understand more about the new technology.
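Returning to the valuation tax raised earlier in these comments, the following is a simple numerical sketch, in Python, of apportioning such a tax by the US share of a company's operations. All of the figures, including the tax rate and the use of revenue as the apportionment measure, are hypothetical assumptions for illustration.

# Hypothetical figures; the rate and the apportionment measure are assumptions.
self_assessed_valuation = 500_000_000_000  # company's self-assessed value, in USD
us_revenue = 60_000_000_000                # revenue from US operations, in USD
worldwide_revenue = 150_000_000_000        # revenue from all operations, in USD
tax_rate = 0.005                           # hypothetical annual rate of 0.5%

us_share = us_revenue / worldwide_revenue          # 0.40
tax_due = self_assessed_valuation * us_share * tax_rate

print(f"US share of operations: {us_share:.0%}")   # 40%
print(f"Annual tax due: ${tax_due:,.0f}")          # $1,000,000,000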