AI Policies

AI Policy Overview

The content below outlines policy areas and references relevant University of Cincinnati policies that apply to the use of AI and Generative AI within the university's ecosystem. These references help ensure that AI enablement across the institution aligns with the university's broader data governance and information security standards.

These policies are integral to guiding the ethical, secure, and responsible use of AI technologies at the university. For more information, visit the University of Cincinnati's policy page.

  • Policy Name: Data Governance & Classification Policy
  • Relevant AI Application: AI Data Management
  • Following the existing data governance and classification policy helps to ensure that AI models use only data that has been classified and handled according to university standards, safeguarding sensitive information and maintaining compliance with data governance practices.

  • Policy Name: Acceptable Use of Information Technology Policy
  • Relevant AI Application: AI Tools and Resources
  • Following the acceptable use policy helps to regulate the ethical and appropriate use of AI technologies, ensuring they align with the university's IT usage standards, including restrictions on generating or using unethical AI content.

  • Policy Name: Information Security Incident Management & Response Policy
  • Relevant AI Application: AI System Security
  • Following the information security incident management and response policy helps to ensure that security incidents involving AI systems are promptly identified, reported, and remediated in line with the university's established response procedures.

  • Policy Name: Electronic & Information Technology (EIT) Accessibility Policy
  • Relevant AI Application: Inclusive AI Development
  • Following the EIT accessibility policy helps to ensure that AI tools and applications are accessible to users with disabilities, promoting inclusivity in the development and deployment of AI solutions across the university.

  • Policy Name: Vulnerability Management Policy
  • Relevant AI Application: AI Vulnerability Assessment
  • Following the vulnerability management policy helps to address potential vulnerabilities in AI systems, ensuring regular assessment and mitigation of risks associated with AI implementations, such as susceptibility to adversarial attacks.