By Damodara “DP” Battula, Principal Cloud Architect and Lead Data Scientist
Every day, countless systems work in unison behind the scenes, powering our lives. From the electricity that lights our homes to the internet that provides global connectivity, our critical infrastructure is integral to our economy, our security, and our way of life.
The Department of Homeland Security (DHS) plays a crucial role in protecting the 16 critical infrastructure sectors of the United States. It serves as a central hub for coordinating and collaborating with federal, state, local, and private sector partners to share information, identify vulnerabilities, and develop strategies to protect critical infrastructure. By collecting, analyzing, and disseminating information on threats and vulnerabilities, DHS empowers stakeholders to make informed decisions and take appropriate actions to protect systems and assets.
Artificial Intelligence (AI) offers immense potential to enhance the security and efficiency of our critical infrastructure, from predicting and preventing cyberattacks to optimizing energy distribution. However, it also introduces new risks, as malicious actors attempt to exploit AI to launch sophisticated attacks or manipulate critical systems. AI systems themselves can be vulnerable to attacks, raising concerns about data privacy and security.
Framework at a Glance
On November 14, 2024, DHS, in consultation with the Artificial Intelligence Safety and Security Board, published a new Framework that examines the shared and separate responsibilities of key stakeholders for the secure operation of our critical infrastructure with respect to AI. The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure outlines a set of voluntary responsibilities for cloud service providers, AI developers, infrastructure owners and operators, civil society, and the public sector. It evaluates these roles across five critical areas: securing environments, responsible AI development, data governance, safe deployment, and ongoing performance monitoring.
The new Framework complements a body of best practices and guidance issued by the federal government, including Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; OMB Memorandum M-24-10 on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence; the Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; and additional guidance focused specifically on AI safety and security relevant to critical infrastructure. The Framework provides both technical and process recommendations, aiming to foster the safe, secure, and trustworthy deployment of AI across the critical infrastructure sectors.
Key Takeaways
In reviewing the Framework, a few key takeaways emerge.
1. Emphasis on Stakeholder Engagement
The new Framework places great importance on stakeholder engagement from the outset. Applying the Framework drives a collaborative approach that ensures all perspectives are considered and the best decisions are made. The voluntary guidance offers recommended approaches, including a detailed matrix, aimed at clearly defining the roles and responsibilities of each entity in the ecosystem. Both the public sector and civil society are called out as key participants in this discourse, reflecting the importance of advocating for privacy and ensuring regulatory oversight.
2. The Need for Advanced Skills
The government and industry alike are gearing up to leverage the transformative potential of AI. As such, both must equip themselves with advanced skills in AI and machine learning. Staying one step ahead of adversaries (who are increasingly leveraging AI in their attacks) requires ongoing focus on the quickly evolving AI landscape. The Framework emphasizes the need for research focused on understanding the relationship between AI model architecture and real-world outcomes. By investing in research, we gain valuable insights into the safe and responsible deployment of AI.
3. Human-centric, Responsible AI
Efforts to improve the ability to protect and secure our critical infrastructure must continue to balance ethical considerations. Use of AI systems must align with human-centric values and use secure by design principles. AI model developers should create models aligned with human values, prioritizing helpfulness, accuracy, fairness, and transparency. Furthermore, developers should identify capabilities associated with autonomous activity, physical and life sciences, cybersecurity, and other capabilities that could impact critical infrastructure when deployed in high-risk contexts. Organizations should incorporate human oversight in AI decision-making processes, especially for critical infrastructure applications.
4. Effective Governance
The guidance outlines a comprehensive set of strategies for establishing effective governance structures for the management of high-quality AI projects. It advocates for a strong governance model, including a privacy framework and ethical guidelines, as core to any AI project. Given the complexity of the ecosystems associated with the 16 critical infrastructure sectors and the risks associated with their compromise, governance must be established early and evaluated for compliance and efficacy on a continuous basis.
5. Importance of Procurement Processes
The Framework stresses the need for thorough vetting, due diligence, and adequate security measures when procuring AI solutions. It places responsibility firmly on critical infrastructure owners and operators to ensure that AI solutions developed are tested, evaluated, validated, and verified by operational and domain experts. Where developers must apply secure by design principles, organizations that procure and implement that software should prioritize a “secure by demand” approach, rigorously assessing both enterprise and product security to guarantee adherence to cybersecurity, privacy, and data integrity standards.
Conclusion
To sum up, understanding and implementing the new DHS guidance is crucial as it offers a strategic roadmap for AI programs that intersect with our nation’s critical infrastructure. The Framework underlines the shift towards more collaborative, technically skilled, and ethically driven management of AI projects. It reinforces the importance of establishing a viable, risk-based action plan to navigate the future of our critical infrastructure.
Unissant works with federal agencies to envision AI use cases, pilot and implement AI-driven systems, and advance the maturity of AI usage at scale by establishing AI Centers of Excellence. Learn more about our approach to AI at Advanced Intelligence Services | Unissant.