Biden Administration Calls for Disclosure of Excessive Power Consumption

In a move aimed at bolstering national security and oversight of artificial intelligence (AI) development, the Biden administration is reportedly considering an executive order that would compel cloud computing companies to report unusually large purchases of computing power to the U.S. government. This development, first reported by Semafor on September 27, has sparked discussion about the potential implications of such a policy shift for the tech industry and national security.

The proposed executive order, which is still in the works and subject to change, would task the Commerce Department with drafting rules requiring cloud computing giants like Microsoft, Google, and Amazon to disclose instances in which a customer purchases computing resources exceeding a specified threshold. The initiative mirrors the "know-your-customer" policies established in the banking sector to combat money laundering and other illicit financial activity, under which firms must report cash transactions exceeding $10,000.

However, the focus of this proposed policy is not financial transactions but the allocation of computational resources. The primary objective is to give the U.S. government a way to proactively identify potential AI-related security threats, particularly those originating from foreign entities. For instance, if a company based in the Middle East were to purchase significant computational power from Amazon Web Services to build a powerful large language model comparable to the one underlying ChatGPT, the reporting requirement would serve as an early warning mechanism for American authorities.
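The reporting mechanism described above resembles a simple threshold rule: flag any compute purchase above a set cutoff. The sketch below illustrates that logic in Python. The threshold value, the `gpu_hours` unit, and all names are illustrative assumptions; the actual executive order has not specified a metric or cutoff.

```python
from dataclasses import dataclass

# Hypothetical cutoff, analogous to the $10,000 cash-reporting rule in banking.
# The real order has not defined a metric or threshold; this is an assumption.
REPORTING_THRESHOLD_GPU_HOURS = 100_000

@dataclass
class ComputePurchase:
    customer: str
    gpu_hours: float  # total accelerator time purchased (illustrative unit)

def purchases_to_report(purchases):
    """Return purchases that cross the assumed disclosure threshold,
    mirroring a know-your-customer style reporting rule."""
    return [p for p in purchases if p.gpu_hours >= REPORTING_THRESHOLD_GPU_HOURS]

orders = [
    ComputePurchase("small-studio", 2_500),   # stays below the cutoff
    ComputePurchase("llm-lab", 750_000),      # large enough to be reported
]
flagged = purchases_to_report(orders)
print([p.customer for p in flagged])  # only the large purchase is flagged
```

As the article notes below, a purely quantity-based rule like this cannot tell an AI training run apart from, say, a large rendering or mining workload, which is precisely the differentiation concern critics raise.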

This policy proposal marks a significant step toward treating computing power, a critical component of AI systems, as a national resource. Activities ranging from Bitcoin mining to video game development and AI model training all demand substantial computational resources. The forthcoming rules aim to give the U.S. government the means to discern when certain actors, including foreign companies, are harnessing this computational power for AI projects that could pose security risks.

However, while the policy’s intentions are clear, there are concerns about its implementation and potential unintended consequences. Semafor’s reporting highlights that, although the policy is intended to regulate AI development, it could inadvertently affect non-AI applications, such as video game development and Bitcoin mining, which also require significant computing resources. The policy’s quantity-based approach to usage monitoring might struggle to differentiate between these applications and could stifle innovation in unrelated industries.

Furthermore, it’s important to note that AI development is a rapidly evolving field, and the computational power required for different AI tasks can change over time. Large language models (LLMs), which currently demand substantial computational resources, may become more efficient in the future, potentially rendering the defined threshold obsolete. Additionally, some AI tools, like facial recognition algorithms, already function with minimal computational power, raising questions about the policy’s ability to accurately target potential AI threats.
