Navigating the Ethical Landscape of Legal AI: A Practical Guide for Lawyers

As artificial intelligence (AI) tools become more integral to the legal profession, lawyers must be vigilant about maintaining ethical standards when adopting these technologies. Whether it's enhancing legal research, automating document review, or improving client communications, AI offers numerous benefits. However, navigating the ethical landscape—particularly around duties of competence, confidentiality, and client communications—is critical.

Both the ABA and California Bar have issued guidance on the responsible use of AI in legal practice, emphasizing that lawyers must ensure compliance with ethical duties when using these tools. In this post, we’ll explore how off-the-shelf AI tools fit into these frameworks and how legal professionals can use them responsibly without compromising ethical obligations.

1. Duty of Competence: Understanding AI’s Capabilities and Risks

Under ABA Model Rule 1.1 and California Rule of Professional Conduct 1.1, lawyers are required to provide competent representation, which includes understanding the technology used in their practice. AI tools, particularly off-the-shelf solutions like GC AI and Marveri, can dramatically improve efficiency, but lawyers must ensure they understand the tools' capabilities and limitations.

For example, Spellbook—a widely used AI tool for document analysis—complies with the Data Privacy Framework, ensuring that client data is encrypted and protected. It is crucial, though, for lawyers to verify whether any legal AI tool aligns with their firm's specific needs and ethical standards. CoCounsel, which comes with SOC 2 Type 2 attestation and encrypted data usage, provides secure, audited AI services, but lawyers must still critically review the outputs for accuracy and ensure they meet legal requirements.

Competence means not only using these tools effectively but also understanding their underlying security protocols. A lawyer must remain vigilant, regularly reviewing the terms of service and privacy policies to ensure that their chosen tools are being used appropriately. The California Bar’s Practical Guidance emphasizes that lawyers must know how their AI tools handle data, particularly when client information is involved.

2. Duty of Confidentiality: Protecting Client Information

The duty to protect client confidentiality is at the heart of legal ethics, as outlined in ABA Model Rule 1.6 and California Rule 1.6. When using AI tools, this duty extends to ensuring that sensitive client data remains secure and is not improperly accessed or shared.

Each AI tool comes with its own data protection measures, and lawyers must verify that the chosen tools meet the highest standards of data security:

  • GetGC.AI uses OpenAI and Anthropic models, but ensures compliance with California Bar AI guidance, incorporating data isolation and encryption to prevent model training on user inputs.

  • Henchman, which operates on Azure OpenAI Service, is designed for GDPR compliance and certified against SOC 2 and ISO 27001 standards, offering robust risk management programs and assurances that customer data is never used for AI training.

  • Responsiv stands out for its commitment to anonymized data usage and encrypted storage, ensuring no personal information is sold or improperly shared.

Before inputting client data into any AI system, lawyers must confirm that the tool’s data handling protocols align with their ethical obligations. Reviewing the privacy policies and security pages provided by vendors like Spellbook, Responsiv, and SpotDraft is essential to ensure the protection of client information.

In cases where AI tools lack clarity on how data is managed, lawyers should avoid using them for any sensitive client work until such gaps are addressed. The ABA's Formal Opinion 512 reinforces that lawyers must not rely on AI tools that could expose client data to unauthorized parties, intentionally or inadvertently.

3. Client Communications: Transparency and Informed Consent

Client communication is a cornerstone of legal practice, and AI usage must be disclosed when relevant to a case. Both ABA Model Rule 1.4 and California Rule 1.4 stress the importance of keeping clients informed and seeking their consent when necessary—especially when AI tools are being used to process their data or assist in their case.

Lawyers using AI tools must be transparent with their clients about how these tools will affect their representation. For instance, a firm using DraftWise or CoCounsel to review contracts should inform clients that these tools expedite processes while maintaining the necessary standards of accuracy and security. However, it is the lawyer’s responsibility to critically assess the tool's output to ensure the final work product reflects human legal judgment and is error-free.

Tools like Decover, which anonymize data and ensure no sale of usage information, make it easier to communicate these safeguards to clients. Still, firms should always review the privacy and data security features of each tool and disclose them to clients as needed. If a client’s consent is required for certain uses of AI tools, it’s essential to provide full transparency about potential risks and benefits.

The California Bar’s Practical Guidance and the ABA’s Formal Opinion 512 recommend that firms create clear policies around AI use, including when and how client information is handled by AI. This can help avoid misunderstandings and ensure clients are confident in the technology being used on their behalf.

Off-the-Shelf AI Tools: Ethical Considerations

While off-the-shelf AI tools provide efficiency and innovation, ethical usage requires careful consideration of their security protocols and data management policies. Here’s a snapshot of some popular tools and their ethical considerations:

  • GetGC.AI: Ensures compliance with California Bar AI guidelines, with robust data isolation and encryption protocols.

  • Henchman: Designed for GDPR compliance and certified against SOC 2 and ISO 27001 standards, offering significant protections against data misuse.

  • Responsiv: Offers anonymized data usage and encrypted storage, with a strong commitment to privacy.

  • CoCounsel: Combines SOC 2 Type 2 attestation with independent audits, ensuring high levels of security.

Many off-the-shelf legal AI tools provide varying levels of data protection, and lawyers must review these policies before integrating the tools into legal workflows. Always ensure a tool meets your ethical obligations regarding client data before use.

Conclusion: Ethical AI Use in Legal Practice

AI technology can be a powerful ally for law firms, but it comes with ethical responsibilities that cannot be ignored. From maintaining competence in the use of AI to ensuring client confidentiality and transparent communication, lawyers must take proactive steps to meet their ethical obligations.

By staying informed, conducting due diligence on AI providers, and keeping clients in the loop, lawyers can successfully integrate AI into their practice while maintaining the highest ethical standards. At Aloha Legal, we advocate for a thoughtful approach to AI adoption—one that respects both the power of these tools and the ethical duties that come with them.


Legal AI has considered your privacy, security, confidentiality, and professional responsibility obligations. Have you?

Robb Miller

Dual national (US & Canadian) attorney providing legal (securities, IP, corporate) and extra-legal services to technology companies, investors, and funds. Passionate about Fintech, AI, LegalTech, Healthtech, and other disruptive technologies; loves to help companies with corporate identity and partnership ecosystems.

https://robbmiller.me