AI & Agent Usage Policy

Last Updated: January 12, 2026

This AI & Agent Usage Policy ("Policy") governs the design, deployment, configuration, and operation of artificial intelligence systems and autonomous or semi-autonomous agents ("Agents") on the Hexel Studio platform ("Services").

This Policy is incorporated by reference into the Terms and Conditions and applies to all customers, authorized users, developers, and Agents operating on the platform.

Capitalized terms not defined herein have the meanings set forth in the Terms and Conditions.

1. Purpose and Principles

Hexel Studio provides infrastructure for operating AI agents under explicit governance, auditability, and control.

This Policy exists to:

  • Define acceptable use of AI-driven agents
  • Allocate responsibility between Hexel and customers
  • Prevent unsafe, deceptive, or ungoverned autonomy
  • Establish enforceable standards for agent behavior

Hexel does not act as an autonomous decision-maker. Agents are executors of customer-defined logic and policies.

2. Agent Responsibility and Accountability

2.1 Customer Responsibility

Customers are solely responsible for:

  • Agent objectives and configurations
  • Policies, permissions, and constraints
  • Human approval workflows
  • Validation of outputs and actions
  • Consequences of agent behavior

Hexel does not assume responsibility for decisions, actions, or outcomes produced by agents.

2.2 No Delegation of Legal Authority

Agents may not be used as:

  • Legal authorities
  • Fiduciaries
  • Licensed professionals
  • Final decision-makers in regulated domains

Agents may assist humans but may not replace legally required human judgment.

3. Agent Autonomy Classes

Hexel recognizes three classes of agent autonomy:

3.1 Assisted Agents

  • Provide recommendations or analysis
  • Do not execute external actions
  • Require human review

3.2 Semi-Autonomous Agents

  • Execute predefined actions
  • Operate within strict policy bounds
  • Require approval for sensitive actions

3.3 Autonomous Agents

  • Operate continuously without per-action approval
  • Limited to low-risk, reversible actions
  • Must remain fully observable and auditable

Customers are responsible for selecting the appropriate autonomy class.
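For illustration only, the autonomy classes above can be expressed as a configuration check. This is a non-normative sketch; all names (AutonomyClass, AgentConfig, validate) are hypothetical and not part of the Services:

```python
from dataclasses import dataclass
from enum import Enum

class AutonomyClass(Enum):
    ASSISTED = "assisted"          # recommendations only; human review required
    SEMI_AUTONOMOUS = "semi"       # predefined actions; approval for sensitive ones
    AUTONOMOUS = "auto"            # continuous; low-risk, reversible actions only

@dataclass
class AgentConfig:
    name: str
    autonomy: AutonomyClass
    reversible_only: bool = False
    requires_approval_for_sensitive: bool = True

def validate(config: AgentConfig) -> list:
    """Return a list of policy problems for a proposed agent configuration."""
    problems = []
    if config.autonomy is AutonomyClass.AUTONOMOUS and not config.reversible_only:
        problems.append("autonomous agents must be limited to reversible actions")
    if (config.autonomy is AutonomyClass.SEMI_AUTONOMOUS
            and not config.requires_approval_for_sensitive):
        problems.append("semi-autonomous agents need approval for sensitive actions")
    return problems
```

A customer deploying such a check would reject any configuration for which validate returns a non-empty list.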

4. High-Risk Domains and Actions

Agents must not perform the following high-risk actions without explicit safeguards:

  • Financial transactions
  • Legal commitments
  • Medical or health-related decisions
  • Employment decisions
  • Access to sensitive personal data
  • Irreversible destructive operations

Human-in-the-loop approval is required for such actions except where prohibited by applicable law.

5. Human-in-the-Loop Requirements

Customers must implement human oversight where:

  • Actions are irreversible
  • Errors could cause legal, financial, or reputational harm
  • Decisions impact individuals' rights or obligations

Hexel provides mechanisms for approvals but does not enforce business-specific thresholds.
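As a non-normative sketch of such an approval gate (the action names and the execute function are hypothetical; customers define their own high-risk categories and thresholds):

```python
from typing import Callable, Optional

# Hypothetical set of action types a customer has designated high-risk.
HIGH_RISK = {
    "financial_transaction",
    "legal_commitment",
    "medical_decision",
    "employment_decision",
    "sensitive_data_access",
    "destructive_op",
}

def execute(action: str, approved_by: Optional[str], run: Callable):
    """Run an agent action, blocking high-risk actions that lack human approval."""
    if action in HIGH_RISK and approved_by is None:
        raise PermissionError("human approval required for " + action)
    return run()
```

Low-risk actions pass through unchanged; high-risk actions fail closed until a named human approver is recorded.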

6. Explainability and Auditability

Agents must:

  • Produce traceable execution records
  • Allow reconstruction of decisions
  • Emit logs for inputs, outputs, and actions

If an action cannot be explained or audited, it is considered non-compliant use of the Services.
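For illustration, a traceable execution record of the kind described above could be emitted as structured JSON lines, from which the decision sequence can later be reconstructed. This sketch is hypothetical and does not describe the Services' actual log format:

```python
import json
import time
import uuid

def audit_record(agent_id: str, action: str, inputs: dict, outputs: dict) -> str:
    """Serialize one execution step as a JSON line for the audit trail."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outputs": outputs,
    }
    return json.dumps(record)

def reconstruct(log_lines):
    """Rebuild the ordered decision trace from stored JSON lines."""
    return [json.loads(line) for line in log_lines]
```

Because every record carries the inputs, outputs, and action taken, an auditor can replay the trace to explain how a given decision was reached.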

7. Model Usage and Limitations

7.1 Model Selection

Customers control which AI models are enabled. Hexel does not warrant the accuracy or suitability of any model.

7.2 Model Outputs

Outputs may be inaccurate, biased, or incomplete. Customers must independently verify outputs before reliance.

7.3 No Training on Customer Data

Hexel does not train foundation models on customer data unless explicitly agreed in writing.

8. Prohibited Agent Behavior

Agents must not:

  • Impersonate humans or organizations deceptively
  • Conceal their automated nature where disclosure is required
  • Generate unlawful, harmful, or deceptive content
  • Circumvent platform policies or safeguards
  • Perform surveillance without lawful authorization

9. Continuous Operation and Monitoring

Customers deploying long-running or continuously operating agents must:

  • Monitor performance and behavior
  • Implement kill switches or emergency stops
  • Review audit logs periodically

Failure to monitor agents constitutes misuse of the Services.
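The kill-switch requirement above can be sketched, for illustration only, as a long-running loop guarded by a stop signal (the MonitoredAgent class is hypothetical, not a Hexel API):

```python
import threading

class MonitoredAgent:
    """Long-running agent loop with an emergency stop (kill switch)."""

    def __init__(self):
        self._stop = threading.Event()

    def stop(self) -> None:
        """Emergency stop: the loop exits before its next iteration."""
        self._stop.set()

    def run(self, step, interval: float = 0.01) -> int:
        """Run `step` repeatedly until stopped; return the iteration count."""
        iterations = 0
        while not self._stop.is_set():
            step()
            iterations += 1
            self._stop.wait(interval)  # sleep, but wake immediately on stop()
        return iterations
```

In practice stop() would be wired to an operator console or automated monitor so the agent can be halted the moment its behavior deviates.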

10. Enforcement and Remediation

Hexel may suspend or terminate agents or accounts if:

  • This Policy is violated
  • Agent behavior poses material risk
  • Required by law or regulation

Hexel may act immediately where necessary to prevent harm.

11. Regulatory Compliance

Customers are responsible for compliance with:

  • Data protection laws
  • AI governance laws
  • Industry-specific regulations

Hexel provides infrastructure support but does not certify regulatory compliance.

12. Changes to This Policy

Hexel may update this Policy from time to time. Continued use of the Services after changes become effective constitutes acceptance.

13. Contact Information