Safety & Security

Our commitment to responsible AI and protecting your data.

AI Safety & Risk Mitigation

Mitigating Bias

AI models learn from vast amounts of text and images from the internet, which unfortunately contain human biases. If these biases are not addressed, a model can reproduce them in its output. We actively combat this by building diverse, representative evaluation datasets and by continuously testing for and measuring these biases. When we find them, we use techniques such as data augmentation and fine-tuning to reduce their impact, striving to create models that are fair and equitable for all groups.
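To make this concrete, here is a minimal sketch of the kind of group-disparity check such an evaluation could run. The dataset layout, the score_output scorer, and the 0.05 threshold are illustrative assumptions for this example, not a description of our production pipeline.

```python
# Minimal sketch of a group-disparity check over an evaluation set.
# The (group, output) layout, score_output(), and the threshold are
# illustrative assumptions, not our production tooling.
from collections import defaultdict

def disparity_report(examples, score_output, threshold=0.05):
    """examples: iterable of (group_label, model_output) pairs.
    score_output: callable returning a per-output score in [0, 1],
    e.g. the probability from a toxicity or refusal classifier."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for group, output in examples:
        totals[group] += score_output(output)
        counts[group] += 1

    means = {g: totals[g] / counts[g] for g in counts}
    gap = max(means.values()) - min(means.values())
    return {
        "per_group_mean": means,
        "max_gap": gap,
        "flagged": gap > threshold,  # flags for human review, not an automatic verdict
    }
```

A gap above the threshold would prompt closer inspection and, if confirmed, targeted data augmentation or fine-tuning as described above.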

Risks & Limitations

AI is not perfect. Our models can make mistakes, generating plausible but incorrect information (a phenomenon known as "hallucination"). Despite our safety measures, there's a risk that models could generate biased or harmful content. We are transparent about these limitations and encourage users to critically evaluate all outputs, especially for high-stakes applications.

Harmful Content Reduction

We employ a multi-layered approach to reduce the generation of dangerous, hateful, or explicit content. This includes classifiers that filter both prompts and outputs, and ongoing red-teaming exercises in which we simulate adversarial attacks to find and fix vulnerabilities before they can be exploited.
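The sketch below illustrates how such a layered gate can sit around model inference. The classify_text and generate callables and the thresholds are placeholders for this example; they do not describe our actual classifiers or their settings.

```python
# Illustrative sketch of a layered moderation gate around model inference.
# classify_text() and generate() stand in for a real safety classifier and
# model call; the thresholds are example values only.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5

def moderated_generate(prompt, classify_text, generate):
    # Layer 1: screen the incoming prompt.
    prompt_risk = classify_text(prompt)  # probability the prompt is unsafe
    if prompt_risk >= BLOCK_THRESHOLD:
        return {"status": "blocked", "reason": "unsafe prompt"}

    output = generate(prompt)

    # Layer 2: screen the generated output before returning it.
    output_risk = classify_text(output)
    if output_risk >= BLOCK_THRESHOLD:
        return {"status": "blocked", "reason": "unsafe output"}
    if max(prompt_risk, output_risk) >= REVIEW_THRESHOLD:
        return {"status": "flagged", "output": output}  # route to human review

    return {"status": "ok", "output": output}
```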

Responsible Use Policy

Our terms of service strictly prohibit using our models for illegal activities or for generating hate speech, harassment, or misinformation campaigns. We enforce these policies through technical monitoring and human review, and we suspend accounts that violate our safety principles.

Our Security & Privacy Policy

Data Handling and Protection

We are committed to protecting your data. We employ industry-standard security measures, including encryption at rest and in transit, to safeguard your information against unauthorized access, disclosure, or alteration. Our infrastructure is designed for security and resilience.
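As a conceptual illustration of encryption at rest, the sketch below uses the cryptography package's Fernet recipe. It is not our actual storage layer: in a real deployment, keys are held in a managed key-management service and data in transit is protected separately by TLS.

```python
# Toy illustration of symmetric encryption at rest using the
# cryptography package's Fernet recipe. Conceptual sketch only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: fetched from a key-management service
fernet = Fernet(key)

record = b'{"user_id": "example", "prompt": "hello"}'
ciphertext = fernet.encrypt(record)  # what would be written to disk
plaintext = fernet.decrypt(ciphertext)
assert plaintext == record
```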

Data Privacy & Training

We do not train our models on user prompts submitted to our API. To monitor for misuse and abuse, we retain API data for up to 30 days before deletion. For businesses with stricter data privacy needs, a Zero-Data Retention option is available via our Enterprise plan, which ensures no prompts or outputs are logged.
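A 30-day window like this is typically enforced by a scheduled purge job. The sketch below shows the idea; the record structure and its created_at field are hypothetical, and a real deployment would run an equivalent job against its datastore.

```python
# Hedged sketch of a 30-day retention sweep over stored API logs.
# The record shape (dicts with a "created_at" timestamp) is hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(records, now=None):
    """Return only the records still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in records if r["created_at"] >= cutoff]
```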

Inference Provider Privacy

We partner with trusted, large-scale inference providers. We contractually prohibit them from training their own models on our users' data or retaining any data passed through our API. Your privacy is paramount throughout the entire process.

Data Retention

Beyond the API data practices described above, we retain other data, such as account information, only for as long as necessary to provide our services or as required by law. We believe in and practice data minimization.