Step 5. Invoke the process

When you invoke the Query LLM with Context using Embeddings Model process, the user sees an answer that matches the context included in the LLM request.
You can run the process in several ways. For example, you can pass the input parameters with the Run Using option, as shown in the following image:

[Image: sample input parameters passed with the Run Using option.]
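To make the retrieve-then-generate flow behind this process concrete, the following is a minimal, hypothetical sketch. The function names and the in-memory knowledge base are illustrative assumptions, not part of the recipe: in a real deployment, the retrieval step would run a vector query against an Azure AI Search index (after embedding the question with an Azure OpenAI embeddings deployment), and the final prompt would be sent to an Azure OpenAI chat deployment. Both external calls are stubbed here so the shape of the flow is runnable as-is.

```python
# Hypothetical sketch of the "Query LLM with Context using Embeddings Model"
# flow. The Azure OpenAI and Azure AI Search calls are stubbed; only the
# retrieve-then-generate structure is illustrated.

from typing import List

# Stub knowledge base; a real deployment would query an Azure AI Search index.
KNOWLEDGE_BASE = [
    "Azure AI Search supports vector and hybrid retrieval.",
    "Azure OpenAI hosts chat and embeddings model deployments.",
]

def retrieve_context(question: str, top_k: int = 1) -> List[str]:
    """Stub retriever: ranks documents by word overlap with the question.
    Real code would embed the question and run a k-nearest-neighbors
    vector query against the search index."""
    query_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def query_llm_with_context(question: str) -> str:
    """Builds the grounded prompt from retrieved context.
    Real code would send this prompt to a chat completions deployment
    and return the model's answer; here the prompt itself is returned."""
    context = "\n".join(retrieve_context(question))
    return (
        "Answer using only this context:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

answer = query_llm_with_context("What retrieval modes does Azure AI Search support?")
print(answer)
```

The key design point the sketch shows is that the context is fetched first and placed in the prompt, so the LLM answers from the retrieved documents rather than from its training data alone.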