# How Data Tokenization Works

Data tokenization replaces sensitive information with unique tokens, typically generated by an algorithm and stored in a secure tokenization system. When sensitive data such as credit card numbers or personal identifiers is processed or stored, it is substituted with these tokens.

Importantly, a token cannot be reversed to reveal the original data without access to the tokenization system: unlike encryption, there is no mathematical relationship between a token and the value it stands in for. The tokenized data retains no inherent value on its own, providing a robust layer of security. The actual sensitive information is stored in a secure vault or database, isolated from the systems that use the tokens for day-to-day operations.
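The flow described above can be sketched as a minimal, hypothetical token vault. This is an illustration only, not the system's actual implementation: the class name, `tok_` prefix, and in-memory dictionary are assumptions, and a real vault would be a hardened, access-controlled store.

```python
import secrets


class TokenVault:
    """Minimal sketch of a tokenization vault (hypothetical; in-memory for illustration)."""

    def __init__(self):
        # token -> original sensitive value; a real system would isolate this store.
        self._vault = {}

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, so it carries no information about the original value
        # and cannot be reversed by computation alone.
        token = "tok_" + secrets.token_hex(16)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can recover the original data.
        return self._vault[token]


vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
# Downstream systems handle only `token`; the card number stays in the vault.
```

Note that because the token is generated randomly rather than derived from the input, two tokenizations of the same value generally produce different tokens; format-preserving or deterministic schemes exist but are a separate design choice.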


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://enphyr.gitbook.io/enphyr-litepaper/+-data-tokenization/how-data-tokenization-works.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
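As a sketch of how a client might build such a request, the snippet below URL-encodes a question into the `ask` parameter. The sample question is hypothetical; sending the request (e.g. with an HTTP client) is left out, since only the URL construction is defined by this page.

```python
from urllib.parse import urlencode

# Page URL taken from the documentation above.
BASE = ("https://enphyr.gitbook.io/enphyr-litepaper/"
        "+-data-tokenization/how-data-tokenization-works.md")


def build_ask_url(question: str) -> str:
    # urlencode escapes spaces and punctuation so the question is safe
    # to place in the query string.
    return BASE + "?" + urlencode({"ask": question})


# Hypothetical example question; an HTTP GET on this URL would return the answer.
url = build_ask_url("Where is the original sensitive data stored?")
```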
