The Octus AI Framework
AI at Octus – policies, guidelines and practices
Our AI-integrated and customer-centric architecture revolutionizes credit market intelligence and accelerates informed decision-making. Responsibly embedded. Efficiently delivered.
FAQs
Our guiding principles for responsible AI are:
- Fairness and non-discrimination: Our AI models are built to be unbiased and avoid discriminatory outcomes based on factors like race, gender, or income.
- Transparency and explainability: We strive for transparency in our AI systems. We aim to explain how decisions are made, allowing for human oversight and intervention if needed.
- Accountability: We take ownership of our AI models. We have clear roles and processes for development, deployment, and monitoring to ensure responsible use.
- Privacy and security: We prioritize user privacy and data security. We comply with all relevant regulations and implement robust security measures to protect user information.
- Human oversight: Humans remain in control. AI is a tool to augment human decision-making, not replace it.
And our practices include:
- Diverse development teams: We build diverse teams of engineers, data scientists and developers to identify and address potential biases early on.
- Fairness testing: We employ rigorous fairness testing throughout the development process to identify and mitigate bias in datasets and algorithms (a brief sketch follows this list).
- Model explainability tools: We use explainable AI techniques, such as TruLens and variable importance, to understand how models reach conclusions. This allows for human review and intervention if necessary.
- Human-in-the-loop systems: We design systems where humans can review and override AI decisions, particularly in critical areas.
- Regular audits and monitoring: We conduct regular audits to identify and address any emerging bias or performance issues in deployed models.
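As an illustration of the fairness-testing practice above, here is a minimal sketch of one widely used check, the disparate impact ratio. The predictions, group labels and threshold are hypothetical stand-ins for whatever protected attributes and model outputs a real review would examine; this is not Octus's actual test suite.

```python
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates between two groups.
    Values near 1.0 suggest parity; the common "80% rule" flags
    ratios below 0.8 for closer review."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions and group labels, for illustration only.
rng = np.random.default_rng(42)
y_pred = rng.integers(0, 2, size=1_000)   # binary decisions from some model
group = rng.integers(0, 2, size=1_000)    # membership in a protected group
print(f"Disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
```

In practice, a check like this would run alongside other fairness metrics, across every protected attribute relevant to the model.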
We have a well-defined ethical framework guiding AI development and deployment. This framework is based on industry best practices and regulatory guidelines. It outlines principles such as fairness, transparency, accountability, integrity and security.
The Octus AI ethical framework consists of eight pillars:
1 – Fairness: AI solutions should be designed to reduce or eliminate bias against individuals, communities and groups.
2 – Transparency: AI solutions should include responsible disclosure to provide stakeholders with a clear understanding of what is happening in each solution across the AI lifecycle.
3 – Explainability: AI solutions should be developed and delivered in a way that answers the questions of how and why a conclusion was drawn from the solution.
4 – Accountability: Human oversight and responsibility should be embedded across the AI lifecycle to manage risk and comply with applicable laws and regulations.
5 – Data integrity: Data used in AI solutions should be acquired in compliance with applicable laws and regulations and assessed for accuracy, completeness, appropriateness and quality to drive trusted decisions.
6 – Reliability: AI solutions should consistently operate in accordance with their intended purpose and scope and at the desired level of precision.
7 – Security: Robust and resilient practices should be implemented to safeguard AI solutions against bad actors, misinformation or adverse events.
8 – Safety: AI solutions should be designed and implemented to safeguard against harm to people, businesses and property.
We mitigate bias through:
- Data quality: We prioritize high-quality data with randomized and diverse representation to minimize bias in training datasets.
- Algorithmic choice: We carefully select algorithms less prone to bias and continuously evaluate new approaches for fairness.
- Human review and feedback: We incorporate human review loops and feedback mechanisms to identify and rectify biased outcomes during the development and deployment phases.
Ensuring explainability in our AI decisions is paramount. We approach it through the following techniques:
- Feature importance: We identify the data points (features) that have the most significant influence on the model’s decision. This helps us understand which factors play a key role in its conclusions.
- Partial dependence plots: These plots visualize the impact of individual features on the model’s output, allowing us to see how a specific feature value can influence the final outcome.
- Counterfactual explanations: This technique explores “what-if” scenarios. We can see how a slight change in an input might affect the model’s prediction. This helps users understand the model’s reasoning.
- Model-Agnostic Explainable AI (XAI) methods: We utilize techniques like LIME (Local Interpretable Model-Agnostic Explanations) to create simpler, human-interpretable models that mimic the behavior of the complex AI model for specific predictions.
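As an illustration of the last point, below is a minimal sketch of LIME applied to a tabular classifier. The synthetic data, off-the-shelf model and feature names (leverage, covenant_score, etc.) are hypothetical placeholders, not Octus's production models or features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in data; feature names are hypothetical, not Octus features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
feature_names = ["leverage", "coverage", "liquidity", "covenant_score"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple local surrogate model around one prediction and reports
# per-feature weights that a human reviewer can inspect.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["lower_risk", "higher_risk"], mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```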
By leveraging the techniques above, we can provide insights into how AI models arrive at specific conclusions:
- Document classification: For example, we can show which factors, such as headline wording, legal terminology or financial terminology, had the most significant impact on a document’s assignment to a specific credit-related topic.
- Risk index: For example, we can highlight the specific going-concern language and covenants that amplify default risk for a company.
There are limitations to explainability, especially with complex models:
- Black box nature: Highly complex models can be intricate webs of connections, making it challenging to fully understand the reasoning behind every decision.
- Data-driven biases: If the underlying data has biases, the model might inherit them, making it difficult to explain biased outcomes.
- Hallucinations: In GenAI models, despite several in-house guardrails, responses may on rare occasions confuse coreferences, that is, which pronoun or description refers to which entity, when multiple entities or persons are mentioned in a complicated narrative.
- Numerical calculations: While LLMs are effective at textual tasks, their understanding of numbers typically comes from narrative context; they lack the deep numerical reasoning and flexibility of a human mind needed to carry out calculations and make complex financial or legal judgments.
- Human expertise: To mitigate these limitations, we rely on human expertise to interpret the model’s explanations against signals observed in the source data. Subject matter expert (SME) groups from the business, along with our data scientists, examine the results and ensure they align with ground truth.
- Documentation and transparency: We document the explainability methods used and the limitations of the model. This promotes transparency and helps users understand the level of certainty associated with the AI’s conclusions.
Our AI systems rely on a variety of data sources to train and operate effectively, which include:
- Internal data: We leverage anonymized historical product usage data, which provides valuable insights to support improvements to our product and service offerings. Our data retention and anonymization policies ensure this data doesn’t contain any personally identifiable information (PII).
- External data: We may incorporate external datasets, anonymized and aggregated, on market trends, economic indicators and industry benchmarks. This enriches our models with a broader perspective.
Data quality is paramount. We have stringent measures in place to ensure:
- Accuracy: We implement data validation and cleaning techniques to minimize errors and inconsistencies in the data.
- Completeness: We strive for comprehensive datasets to avoid biases caused by missing information.
- Relevance: We select data that aligns with the specific purpose of the AI model being trained.
- Fairness: We analyze the data for potential biases and take steps to mitigate them. This might involve adjusting data collection practices or employing debiasing algorithms.
Data privacy and security are top priorities. We have robust measures in place, which include:
- Data anonymization: We anonymize all data before using it for training or operation. This protects user privacy and ensures compliance with data privacy regulations (a brief sketch follows this list).
- Access controls: We implement strict access controls to restrict access to sensitive data.
- Security protocols: We adhere to industry-standard security protocols to safeguard data from cyberattacks and unauthorized access.
- Regular audits: We conduct regular audits of our data security practices to identify and address any vulnerabilities.
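As a concrete illustration of the anonymization point above, here is a minimal sketch of one common building block: salted hashing to pseudonymize an identifier before data enters a training pipeline. The column names and in-code salt are illustrative only; strictly speaking this step is pseudonymization, and a production pipeline would combine it with other de-identification measures and a managed secret store.

```python
import hashlib
import pandas as pd

SALT = "example-salt"  # hypothetical; real pipelines pull salts from a secret store

def pseudonymize(value: str) -> str:
    """One-way salted hash so records stay linkable without exposing the raw value."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

raw = pd.DataFrame({
    "user_email": ["a@example.com", "b@example.com"],  # hypothetical PII column
    "usage_minutes": [42, 17],
})
anon = (raw.assign(user_id=raw["user_email"].map(pseudonymize))
           .drop(columns=["user_email"]))
print(anon)
```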
In addition, we strive for transparency regarding data usage. We provide users with clear information about how their data is used in our AI models, and we may offer users options to control or restrict the use of their data for specific purposes.
Rigorous testing and validation are cornerstones of our AI development process, especially for credit models that rely on credit data and news. Below are our testing processes with which we ensure the reliability and robustness of our AI systems:
- Data splitting: We split our data into training, validation and testing sets. The training set teaches the model; the validation set helps fine-tune hyperparameters to avoid overfitting; and the unseen testing set provides an unbiased assessment of the model’s generalizability (a brief sketch follows this list).
- Performance metrics: We employ a battery of performance metrics relevant to the specific application. For credit models, this might include accuracy, precision, recall, F1 score, and Area Under the ROC Curve (AUC-ROC). These metrics tell us how well the model distinguishes between creditworthy and non-creditworthy borrowers.
- Stress testing: We stress test our models with extreme or unexpected data points to assess their resilience in unforeseen situations. This helps us identify potential weaknesses and improve the model’s ability to handle edge cases.
- Backtesting: For credit models, we can backtest the model’s performance on historical data to see how it would have performed in the past. This helps assess the model’s effectiveness and identify potential biases.
- Human-in-the-loop testing: We integrate human review into the testing process. Domain experts evaluate the model’s outputs and identify cases where the model might be making inaccurate or unfair decisions. This human oversight mitigates potential risks.
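To make the first two points concrete, here is a minimal sketch of a train/validation/test split and the metrics named above. The synthetic data and off-the-shelf classifier are stand-ins for a real credit model, not a description of Octus's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real credit model would use curated features.
rng = np.random.default_rng(7)
X = rng.normal(size=(2_000, 6))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

# Carve off an unseen test set first, then split the remainder into
# train/validation (roughly 60/20/20 overall).
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"validation accuracy (used for tuning): {model.score(X_val, y_val):.3f}")

# The test set is scored once, after all tuning decisions are frozen.
proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)
print(f"accuracy={accuracy_score(y_test, pred):.3f} "
      f"precision={precision_score(y_test, pred):.3f} "
      f"recall={recall_score(y_test, pred):.3f} "
      f"f1={f1_score(y_test, pred):.3f} "
      f"auc={roc_auc_score(y_test, proba):.3f}")
```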
We maintain the accuracy of deployed models over time through:
- Model monitoring: We continuously monitor the performance of deployed models in production. This allows us to detect any performance degradation or shifts in the data distribution that might affect the model’s accuracy over time (a brief drift-check sketch follows this list).
- Model retraining: Based on monitoring, we may retrain the model with new data to maintain its accuracy and effectiveness.
- Version control: We maintain a clear version control system for our models. This allows us to track changes, revert to previous versions if necessary, and ensure consistency across deployments.
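As a sketch of what a distribution-shift check can look like, the snippet below runs a two-sample Kolmogorov-Smirnov test comparing a training-time feature distribution against a window of production data. The data and significance threshold are illustrative; a production monitor would track many features and metrics, and this is one common technique rather than Octus's specific tooling.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift_check(train_col, live_col, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the live
    distribution has shifted away from the training-time distribution."""
    stat, p_value = ks_2samp(train_col, live_col)
    return {"ks_stat": round(stat, 3), "p_value": round(p_value, 4),
            "drifted": p_value < alpha}

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, size=5_000)  # distribution seen in training
live_feature = rng.normal(loc=0.4, size=1_000)   # simulated shifted production window
print(feature_drift_check(train_feature, live_feature))
```

A drift flag like this would typically trigger the retraining workflow described above rather than an automatic model change.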
We prepare for edge cases and unexpected situations through:
- Scenario testing: We develop scenarios that represent potential edge cases or unexpected situations. We test the model’s behavior in these scenarios to identify and address potential issues.
- Human oversight: We maintain human oversight capabilities within the system. Humans can intervene in critical situations or when the model’s output is deemed unreliable.
We believe in responsible AI when dealing with sensitive credit data and news. Here’s how we ensure a healthy balance between AI power and human oversight:
- Human-in-the-loop approach: We primarily follow a “human-in-the-loop” approach. Our AI models generate outputs, but our experts have the final say in credit decisions.
- Expert review: Subject matter experts (SMEs) review the AI’s outputs, considering the unique circumstances and supporting evidence. This mitigates potential bias from the model and ensures sound judgment.
We enable users to understand and challenge AI outputs through:
- Explanation and transparency: Users can track citations and underlying sources for AI-generated outputs. This allows them to understand the factors influencing the decision and identify potential areas for discussion.
- Dispute process: We have a clear dispute process in place. Users can contest AI outputs if they believe there are inaccuracies or extenuating circumstances not captured by the model. Our financial and legal experts will then review the case and make a final judgment.
We correct errors and address emerging issues through:
- Clear escalation channels: We provide clear channels for users to escalate concerns about AI outputs. This allows for swift human intervention when necessary.
- Error correction and feedback loop: We have a feedback loop in place. If human experts identify errors in the AI’s outputs, these get logged and fed back into the model training process. This helps us continuously improve the model’s accuracy and fairness.
- Algorithmic bias monitoring: We actively monitor our AI models for potential biases that might creep in over time due to concept drift, covariate drift or context drift. We can then take corrective measures, such as data debiasing techniques or model retraining, to address any identified biases.
We value customer feedback and take it seriously, especially when it comes to our AI systems that utilize credit data and news. We take a customer-centric approach and provide multiple channels for customers to provide feedback or report issues related to our AI systems:
- In-app feedback forms: We integrate user-friendly feedback forms directly within our applications. This allows users to conveniently report issues or share their experience with AI-generated outputs.
- Dedicated customer support: We have a customer success team trained to address concerns about AI decisions. They can gather details, escalate issues and provide clear explanations with the help of the AI team.
Here’s how we analyze feedback and evaluate action:
- Categorization and analysis: All customer feedback is categorized and analyzed by our product teams to identify trends and recurring issues. This helps us pinpoint areas needing improvement within our AI models.
- Root cause analysis: We conduct root cause analysis to understand the reasons behind customer concerns. This might involve reviewing specific data points or investigating the model’s decision-making process for a particular case.
- Prioritization and action: Based on analysis, we prioritize issues and take appropriate actions. This could involve model retraining, bias mitigation techniques, improved explanations or adjustments to user interfaces for better transparency.
Here’s what our feedback loop for improvement looks like:
- Transparency and communication: We strive to be transparent with customers about how their feedback is used. We may share general insights gleaned from feedback or provide updates on how their input has led to improvements.
- Continuous learning: Customer feedback is a valuable source of real-world data. We incorporate this data into our AI development and improvement process. This allows our models to continuously learn and adapt to better serve customer needs.
We have adopted the NIST Cybersecurity Framework (CSF), a comprehensive and widely recognized set of guidelines and best practices for managing cybersecurity risks. Adhering to the NIST CSF helps ensure that we maintain a robust and holistic approach to security across our organization.
Additionally, we are currently undergoing an SOC 2 Type I audit, which evaluates our system’s design against the Trust Services Criteria for security, availability, and confidentiality. This independent assessment validates that our AI platform has the necessary controls in place to meet these critical security principles.
Regulatory compliance is a top priority for our AI system. We closely monitor and align with industry-specific standards and guidelines related to AI development, deployment and use. This includes adhering to best practices around data governance, model transparency, fairness and accountability.
Our SOC 2 audit further demonstrates our commitment to meeting strict security and compliance requirements, giving our customers added assurance that our AI system operates within necessary guardrails.
We have implemented comprehensive data protection measures to fully comply with applicable privacy regulations like GDPR and CCPA. This includes obtaining proper consents, providing clear disclosures about data collection and use, and honoring data subject rights.
Our AI system is designed with privacy by default, ensuring that data is collected, processed, and stored securely and only for legitimate purposes. We also conduct regular privacy impact assessments to identify and mitigate potential risks.
Data protection is a core part of our SOC 2 audit, which attests to the presence and effectiveness of our privacy controls.
We employ a defense-in-depth approach to secure our AI system, with multiple layers of protection including:
- Encryption of data in transit and at rest (see the sketch after this list)
- Strict access controls and authentication mechanisms
- Continuous monitoring and logging to detect anomalous activities
- Regular vulnerability scanning and penetration testing
- Security awareness training for all staff
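As a small illustration of the encryption-at-rest idea in the first bullet, the sketch below uses symmetric (Fernet) encryption from the `cryptography` package. Generating the key in code is for demonstration only; real systems keep keys in a managed KMS or HSM, and this sketch is not a description of Octus's actual controls.

```python
from cryptography.fernet import Fernet

# Demo only: real systems keep keys in a managed KMS/HSM, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"account=12345; risk_score=0.87"      # hypothetical sensitive record
ciphertext = fernet.encrypt(record)             # what would be persisted to disk
assert fernet.decrypt(ciphertext) == record     # round-trip sanity check
print("encrypted bytes:", ciphertext[:32], b"...")
```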
Our adherence to the NIST CSF ensures that we are implementing security best practices across all critical functions – Identify, Protect, Detect, Respond, and Recover.
We take a proactive approach to identifying and mitigating risks to our AI system. This includes:
- Conducting thorough security testing and code reviews during development
- Implementing strong input validation and output filtering to prevent common attacks like SQL injection or cross-site scripting
- Regularly monitoring for new AI-specific threats and vulnerabilities
- Applying the principle of least privilege to limit the potential impact of any compromise
- Building in safeguards against misuse, such as rate limiting and strict usage policies
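As an illustration of the last safeguard above, here is a minimal token-bucket rate limiter. The rate and capacity are arbitrary example values, not Octus's actual usage policy.

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens per second up to
    `capacity`; each request consumes one token or is rejected."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then try to spend one token.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)  # arbitrary example limits
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 burst requests allowed")
```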
Our upcoming SOC 2 report will attest to the design effectiveness of these AI security controls.
We have a comprehensive incident response plan that is regularly tested and updated. Our plan outlines clear roles and responsibilities, communication protocols and step-by-step procedures for containing, investigating and recovering from security incidents.
We also have pre-established relationships with external incident response experts and legal counsel to ensure we can quickly mobilize assistance if needed.
Incident response is a key area assessed under our SOC 2 audit, providing assurance that we have the right capabilities to minimize harm during adverse events.
Contact Us
Want to elevate your decision-making in the credit market landscape? Reach out today to learn how Octus solutions can meet your needs.