The LangChain integration instruments the LangChain Python library to emit traces for requests made to LLMs,
chat models, embeddings, chains, and vector store interfaces.

All traces submitted from the LangChain integration are tagged by:

- ``service``, ``env``, ``version``: see the `Unified Service Tagging docs <https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging>`_.
- ``langchain.request.provider``: LLM provider used in the request.
- ``langchain.request.model``: LLM/Chat/Embeddings model used in the request.
- ``langchain.request.api_key``: LLM provider API key used to make the request (obfuscated into the format ``...XXXX``, where ``XXXX`` is the last four characters of the key).

**Note**: For ``langchain>=0.1.0``, this integration drops tracing support for the following deprecated langchain operations in favor
of the recommended alternatives described in the `langchain changelog docs <https://python.langchain.com/docs/changelog/core>`_:

- ``langchain.chain.Chain.run/arun``, replaced by ``langchain.chain.Chain.invoke/ainvoke``
- ``langchain.embeddings.openai.OpenAIEmbeddings.embed_documents``, replaced by ``langchain_openai.OpenAIEmbeddings.embed_documents``
- ``langchain.vectorstores.pinecone.Pinecone.similarity_search``, replaced by ``langchain_pinecone.PineconeVectorStore.similarity_search``
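
As an illustrative sketch only (assuming ``langchain>=0.1.0`` with the ``langchain-core`` and ``langchain-openai`` packages installed, and an OpenAI API key configured in the environment), the traced replacements look like::

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    # Traced: the recommended ``invoke`` entrypoint (``Chain.run`` is no longer traced)
    chain = ChatPromptTemplate.from_template("Tell me about {topic}") | ChatOpenAI()
    result = chain.invoke({"topic": "distributed tracing"})

    # Traced: embeddings via ``langchain_openai`` (not ``langchain.embeddings.openai``)
    vectors = OpenAIEmbeddings().embed_documents(["hello world"])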

**Note**: For ``langchain>=0.2.0``, this integration does not patch ``langchain-community`` if it is not available, as ``langchain-community``
is no longer a required dependency of ``langchain>=0.2.0``. This means that this integration will not trace the following:

- Embedding calls made using ``langchain_community.embeddings.*``
- Vector store similarity search calls made using ``langchain_community.vectorstores.*``
- Total cost metrics for OpenAI requests

Enabling
~~~~~~~~

The LangChain integration is enabled automatically when you use
:ref:`ddtrace-run<ddtracerun>` or :ref:`import ddtrace.auto<ddtraceauto>`.

Note that these commands also enable the ``requests`` and ``aiohttp``
integrations which trace HTTP requests to LLM providers, as well as the
``openai`` integration which traces requests to the OpenAI library.

Alternatively, use :func:`patch() <ddtrace.patch>` to manually enable the LangChain integration::

    from ddtrace import config, patch

    # Note: be sure to configure the integration before calling ``patch()``!
    # config.langchain["logs_enabled"] = True

    patch(langchain=True)

    # to trace synchronous HTTP requests
    # patch(langchain=True, requests=True)

    # to trace asynchronous HTTP requests (to the OpenAI library)
    # patch(langchain=True, aiohttp=True)

    # to include underlying OpenAI spans from the OpenAI integration
    # patch(langchain=True, openai=True)


Configuration
~~~~~~~~~~~~~

.. py:data:: ddtrace.config.langchain["service"]

   The service name reported by default for LangChain requests.

   Alternatively, set this option with the ``DD_LANGCHAIN_SERVICE`` environment variable.
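
   For example, to override the service name (a minimal sketch; the option must be set before calling ``patch()``)::

       from ddtrace import config, patch

       config.langchain["service"] = "my-langchain-service"

       patch(langchain=True)

   The same result can be achieved without code changes by exporting ``DD_LANGCHAIN_SERVICE=my-langchain-service`` in the application's environment.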

N)Ú__doc__© r   r   ú_/home/ubuntu/.local/lib/python3.10/site-packages/ddtrace/contrib/internal/langchain/__init__.pyÚ<module>   s    