Artificial Intelligence Policy
Date of Policy: This Policy was approved on November 21, 2024.
1. Policy Statement and Scope
The Institute of Corporate Directors (ICD) recognizes artificial intelligence (AI) technologies as transformative tools that can enhance organizational effectiveness while presenting significant governance considerations. This policy establishes the framework for responsible AI adoption and usage across the ICD.
It sets out mandatory requirements and guidelines for the ethical and responsible use of AI technologies within the ICD.
With this AI Policy, ICD commits to:
• Implementing AI systems that enhance organizational capabilities while protecting stakeholder interests
• Ensuring all AI usage complies with applicable laws and regulations
• Upholding transparency in AI-supported processes and decisions
• Maintaining rigorous data protection and security standards
• Providing appropriate training and support for AI users
• Regularly reviewing and updating AI practices to reflect emerging standards
The safe and ethical deployment of AI technologies is essential to maintaining the trust of our members, employees, and stakeholders while advancing the ICD's mission of fostering excellence in directorship.
This policy applies to:
• All employees, contractors, volunteers, and personnel performing work on behalf of the ICD;
• All AI technologies and applications used for ICD business purposes;
• All locations where ICD work is conducted, including remote environments;
• All data processed using AI systems for ICD purposes.
2. Acceptable Use of AI
The ICD adopts the core principles outlined in Section 2.1 to ensure AI technologies are employed ethically and responsibly. These principles align AI applications with ICD's mission to foster transparency, accountability, and trust, ensuring AI supports both our organizational goals and broader societal impact. While not exhaustive, a list of acceptable and prohibited uses of AI is included in Appendix A.
2.1 Core Principles
2.1.1 Prioritize Human-Centric AI Support
Design AI systems to enhance human capabilities by providing analysis and recommendations that empower users in their roles. AI should serve as a supportive tool, upholding user independence and informed decision-making, without exerting undue influence or replacing human judgment.
2.1.2 Comply with Laws, Regulations and ICD Policies
All AI systems must adhere to applicable laws, including PIPEDA and relevant provincial privacy regulations, with practices tailored to the system's environment. For closed/internal AI systems, ensure access controls and audit trails safeguard data integrity. For third-party AI solutions, prioritize data anonymization, encryption, and strict vendor protocols to protect member information. In all cases, AI use must follow ICD's policies (Data Protection Policy, Privacy Policy, Workplace Discrimination and Harassment Policy, and Code of Business Conduct and Ethics).
2.1.3 Avoid Bias in AI Systems
AI systems must be designed and used to actively prevent biases and ensure fair treatment of all individuals. Be aware of common biases that may be present in AI systems, such as data bias, algorithmic bias, and confirmation bias. Regularly review and evaluate AI-generated outputs for potential biases and inaccuracies, seeking input from diverse perspectives and stakeholder groups. Document and communicate any identified biases and mitigation efforts to relevant stakeholders.
2.1.4 Ensure Transparency and Openness
Deploy AI systems transparently by communicating their role in analysis and recommendations. Disclose AI use in relevant processes, maintain detailed documentation on functionality and limitations, and confirm vendor transparency in AI operations to reinforce accountability and trust.
2.1.5 Intentional Misuse
Intentional misuse of AI systems refers to actions deliberately taken to exploit AI tools in ways that compromise ethical, legal, or moral standards, posing risks to the safety, privacy, or security of individuals or organizations. This includes activities such as AI-based fraud (e.g., phishing scams or identity theft), discrimination (e.g., reinforcing bias within AI outputs), invasion of privacy (e.g., unauthorized collection of personal data), malicious use (e.g., cyberattacks or social engineering), and spreading misinformation. Users are strictly prohibited from engaging in any form of intentional misuse, and any incidents should be reported promptly to a supervisor or appropriate person(s).
2.1.6 Unintentional Misuse
Unintentional misuse of AI systems refers to situations where users, without malicious intent, use AI tools in ways that lead to negative consequences or harm (including copyright infringement). This can occur due to a lack of understanding, insufficient training, or an oversight in the use of AI technology.
2.1.7 Reporting Misuse
Users are encouraged to report any suspected misuse of AI systems, whether intentional or unintentional, to their supervisor or other appropriate person(s). Reports can be made anonymously through ICD’s Whistleblower hotline [ClearView Connects] and will be handled confidentially.
2.1.8 Evaluation and Approval Process
In addition to upholding ICD’s Delegation of Authority, all AI tools must undergo evaluation and receive approval from at least the CAO, the IT department and the Privacy Officer prior to use. Additional approvals may be required from the Senior Management Team, depending on scope and impact. This review will assess the tool's functionality, alignment with ICD objectives, security, privacy standards, and vendor reliability. ICD will also conduct or request regular post-deployment evaluations for ongoing compliance and performance.
2.1.9 Continuous Improvement and Monitoring
Ensure all AI systems undergo regular evaluations and updates to align with evolving organizational needs, technological advances, and regulatory changes. Incorporate feedback from users and stakeholders and monitor AI performance to uphold ICD’s ethical and quality standards over time.
2.2 User Responsibilities
2.2.1 Exercise Professional Judgment
Users must exercise professional judgment, remaining mindful of AI's impacts on stakeholders, privacy rights, and ICD's public reputation.
2.2.2 Maintain Clear, Purposeful Use
Employ AI technologies to enhance productivity, support informed decision-making, provide recommendations, and generate insights, while prioritizing ethical integrity and stakeholder rights.
2.2.3 Training and Support
ICD will provide training, resources, and support to ensure users understand responsible and ethical AI use. Training will cover AI limitations, risks, and best practices. Users are encouraged to consult the IT department or designated officers for guidance on acceptable AI practices.
2.3 Policy Administration
2.3.1 Reviews and Updates
ICD will review this policy at least annually or in response to significant changes in AI technology, regulatory requirements, or organizational strategy. Updates will be communicated organization-wide, accompanied by appropriate training as needed.
Appendix A
Detailed Authorized Uses for AI
1. Document Creation and Enhancement
• Document Development: Development of business documents, including presentations, meeting minutes, notes, internal memos, and policy drafts.
• Content Generation: Content generation, improvement and personalization through AI-assisted editing, translation, and formatting.
• Automated Summarization: Automated summarization and analysis of lengthy documents, articles, and meeting transcripts.
2. Research and Development
• Preliminary Research: Preliminary research and source summarization, including initial research tasks, source identification, and reference material summaries for marketing, educational programming, and event planning.
• Technical Documentation: Technical and content development for drafting and refining process documentation, basic code scripts, and guidance manuals, including foundational content generation for ongoing projects.
• Creative Ideation: Creative ideation and concept generation for initial project concepts, including marketing campaigns, conference themes, and educational initiatives.
3. Member and Stakeholder Engagement
• Personalized Campaigns: Personalized communication campaigns, including AI-tailored email content, event invitations, and newsletters based on member profiles, engagement history, and expressed interests to enhance relevance and connection.
• Feedback Analysis: Survey and feedback analysis to identify trends, sentiments, and areas for improvement through AI-based analysis of feedback from surveys, social media, and event participation, informing member services and program development.
• Automated Responses: Automated responses for member queries using AI-driven chatbots or email tools to manage common inquiries, providing prompt responses and freeing resources for complex queries.
4. Marketing and Outreach
• Lead Scoring: Lead scoring and prioritization through AI-based evaluation and scoring of potential member leads based on engagement history, role, industry, and interest level to enable targeted outreach.
• Content Curation: Content curation for social media and web, including AI-generated drafts and suggestions for social media posts and website content based on trending governance topics, ensuring alignment with ICD’s voice and values.
• Event Promotion: Event targeting and promotion by identifying optimal times for event promotion and tailoring invitations to potential attendees based on past engagement, increasing participation and relevance.
5. Education and Professional Development
• Learning Path Recommendations: Automated learning path recommendations, including AI-driven course, webinar, or certification suggestions tailored to members’ professional backgrounds, previous ICD engagements, and industry trends.
• Educational Content Creation: Content creation for educational materials with AI-assisted drafting of course materials, assessments, and interactive content, enabling ICD educators to focus on curriculum design and content validation.
• Competency Tracking: Competency tracking and personalized reminders by monitoring member progress on certifications, ICD.D requirements, and continuing education units, sending automated reminders to ensure timely completion.
6. Operational Efficiency
• Employee Onboarding and Training: Employee onboarding and training through AI-driven personalized training paths, resources, and automated FAQs, fostering alignment with ICD’s culture and values.
• Workload Optimization: Workload optimization and task automation to identify and automate routine administrative tasks such as scheduling, reporting, and data entry, allowing teams to prioritize strategic initiatives.
• Analytics: AI-driven analytics, including predictive analytics, to support decision-making.
7. Other
• All other uses must be pre-approved by the CAO before proceeding.
Prohibited Uses of AI Systems
1. Fraud and Deception
• AI-Based Fraud: Using AI to manipulate or deceive individuals or organizations through phishing scams, identity theft, or unauthorized issuance of financial products.
• Spreading Misinformation: Leveraging AI to create, distribute, or amplify false or misleading information intended to deceive or mislead.
2. Privacy and Data Misuse
• Invasion of Privacy: Collecting, accessing, or disseminating personal data through AI without informed consent, violating privacy rights and ICD's data protection standards.
• Unauthorized Surveillance: Using AI for unauthorized tracking of individuals' activities, locations, or communications, except for electronic monitoring within the scope of the Employee Electronic Monitoring Policy.
• Privacy Violations: Sharing personal, confidential, and/or sensitive information with public AI systems without prior approval from the CAO and without proper anonymization, encryption, or adherence to privacy standards.
• Proprietary Data Exposure: Sharing ICD-sensitive, proprietary, and/or confidential information with AI systems without prior approval from the CAO.
3. Discrimination and Manipulation
• Bias and Discrimination: Using AI outputs that reinforce bias or result in discriminatory treatment of individuals or groups, especially when bias is embedded in AI algorithms or training data.
• Manipulative Personalization: Exploiting AI to hyper-target and influence individuals' behavior in ways that could cause harm.
4. Cybersecurity Threats
• Malicious Use: Deploying AI in cyberattacks, such as phishing, social engineering, or exploiting system vulnerabilities.
5. Intellectual Property and Content Misuse
• Copyright Infringement: Generating outputs that violate copyright protections, resulting in unauthorized use of third-party intellectual property.
• Intellectual Property Theft: Using AI to replicate, scrape, or misuse proprietary content, including copyrighted materials, trademarks, or trade secrets, without permission.
• Inappropriate Content: Creating or sharing AI-generated content that is inappropriate or illegal, as defined in the Employee Electronic Monitoring Policy.
6. Harassment and Abuse
• Automated Harassment: Utilizing AI to generate or spread abusive, harassing, or defamatory content, whether through bots or other automated tools, causing distress or damaging reputations.