# Call a Prompt
> Learn how to call your Prompts that are managed on Humanloop.
This guide shows you how to call your Prompts through the API, generating responses from a large language model while keeping your Prompts versioned on Humanloop.
You can call an existing Prompt on Humanloop, or you can call a Prompt you're managing in code. Both use cases are demonstrated below.
### Prerequisites
First you need to install and initialize the SDK. If you have already done this, skip to the next section.
Open up your terminal and follow these steps:
1. Install the Humanloop SDK:
```shell
pip install humanloop
```
```shell
npm install humanloop
```
2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)).
```python
from humanloop import Humanloop
humanloop = Humanloop(api_key="YOUR_API_KEY")
# Check that the authentication was successful
print(humanloop.prompts.list())
```
```typescript
import { HumanloopClient } from "humanloop";
const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
// Check that the authentication was successful
console.log(await humanloop.prompts.list());
```
## Call an existing Prompt
If you don't have a Prompt on Humanloop yet, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first.
### Get the Prompt ID
In Humanloop, navigate to the Prompt and copy its ID: click the Prompt name in the top bar and copy the ID from the popover.
### Call the Prompt by ID
Now you can use the SDK to generate completions and log the results to your Prompt, using the [Prompt Call](/docs/api/prompts/call) method.
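As a sketch, a call by ID might look like the following. The Prompt ID and message content are placeholders (yours will differ), and the snippet only contacts Humanloop if a `HUMANLOOP_API_KEY` environment variable is set:

```python
import os

# Hypothetical Prompt ID copied from the Humanloop app (yours will differ).
PROMPT_ID = "pr_1234567890"

# Chat messages to send to the model behind the Prompt.
messages = [{"role": "user", "content": "Write a tagline for a coffee shop."}]

api_key = os.environ.get("HUMANLOOP_API_KEY")
if api_key:
    from humanloop import Humanloop

    humanloop = Humanloop(api_key=api_key)
    # Call the Prompt by ID; the generation is logged against the
    # Prompt on Humanloop automatically.
    response = humanloop.prompts.call(id=PROMPT_ID, messages=messages)
    print(response.logs[0].output)
```

Reading the API key from the environment keeps it out of source control; in a script you could also pass it directly as in the initialization step above.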
## Call a Prompt defined in code
You can also manage your Prompts in code. Pass the `prompt` details within your API call to generate responses with
the specified parameters.
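A minimal sketch of this pattern is below. The `path`, model, and template are illustrative assumptions, not values from your workspace, and the snippet only contacts Humanloop if a `HUMANLOOP_API_KEY` environment variable is set:

```python
import os

# Prompt details managed in code; model and template are illustrative.
prompt = {
    "model": "gpt-4o",
    "template": [
        {"role": "system", "content": "You are a helpful assistant."}
    ],
}

messages = [{"role": "user", "content": "What is a Prompt in Humanloop?"}]

api_key = os.environ.get("HUMANLOOP_API_KEY")
if api_key:
    from humanloop import Humanloop

    humanloop = Humanloop(api_key=api_key)
    # Passing `prompt` versions the Prompt at this path on Humanloop:
    # if the details have changed, a new version is recorded.
    response = humanloop.prompts.call(
        path="sdk-quickstart/my-prompt",  # hypothetical Prompt path
        prompt=prompt,
        messages=messages,
    )
    print(response.logs[0].output)
```

Because the Prompt details travel with the call, your code remains the source of truth while Humanloop still tracks every version and generation.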
## View your Prompt Logs
Navigate to the **Logs** tab of your Prompt.
You will be able to see the recorded inputs, messages, and model generations.
## Next steps
* [Iterate and improve on your Prompts](../evals/comparing-prompts) in the Editor.
* [Capture end-user feedback](../observability/capture-user-feedback) to monitor your model performance.