The Splunk Admin Config Service (ACS) API is a powerful tool that lets you manage your Splunk Cloud instance programmatically. In this post, I’ll showcase some use cases for Splunk Cloud’s ACS API along with real-world examples.
The examples in this post focus mainly on managing indexes, but the same methodology can be applied to other ACS features as well.
Here are some assumptions that I’m making for this post:
- You have a Splunk Cloud instance.
- You are on a Linux or Mac machine and you are comfortable using the command line.
- You have read through the Splunk Cloud ACS Documentation and understand the basics of ACS.
- You will not use ACS until you have a good understanding of what it does and how it works.
- You are using this post as a reference and not as a guide to using ACS.
Documentation for the Splunk Cloud ACS Commands on version 9.1.2308: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Config/ACSreqs
ACS features
These are the ACS features available in Splunk Cloud. Availability depends on your Splunk Cloud version and on whether your stack is on the Victoria or Classic Experience. You can modify the methods in the examples to fit your specific use case.
Source: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Config/ACSreqs
| ACS feature | Victoria Experience | Classic Experience |
| --- | --- | --- |
| IP allow lists | Yes | Yes |
| Outbound ports | Yes | Yes |
| Private connectivity | Yes | Yes |
| App permissions | Yes | No |
| Authentication tokens | Yes | Yes |
| HEC tokens | Yes | Yes |
| Indexes | Yes | Yes |
| limits.conf configuration | Yes | No |
| Private apps | Yes | Yes * |
| Splunkbase apps | Yes | Yes * |
| Restarts | Yes | Yes |
| Users, roles, and capabilities | Yes | Yes |
| Maintenance windows | Yes | Yes |
| ACS CLI | Yes | Yes ** |
| Terraform Provider for ACS | Yes | Yes |
| FedRAMP IL2 | No | Yes |
Basic Setup
Setting up an API Token
Create an API token through Settings > Tokens > Enable Token Authentication > New Token. Remember that the user’s role determines what the token can do. See the Splunk Docs for more information.
Copy the token and set it as an environment variable in your shell by running the following command:
export SPLUNK_API_TOKEN="<your_api_token>"
Now when you run ACS commands, you can use the variable $SPLUNK_API_TOKEN to authenticate.
Here are a few things to note:
- The token is only shown once. If you lose it, you will need to create a new one.
- Exporting the token as an environment variable isn’t the most secure way to handle it, but it keeps this example simple (see the sketch after this list for one alternative).
- Set an expiration date that is appropriate for your use case. I recommend a short expiration date since it’s easy to create a new token. This helps reduce the risk of unauthorized access.
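If you want to keep the token out of your shell history, here is a minimal sketch (assuming a bash shell) that prompts for the token instead of pasting it into the export command:
# Prompt for the token without echoing it to the terminal or recording it in history
read -r -s -p "Enter your Splunk API token: " SPLUNK_API_TOKEN
echo
export SPLUNK_API_TOKEN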
Setting up the Stack Name
Before using ACS, let’s also set the stack name as a variable. This makes the ACS commands a bit easier to work with. Your stack name is the first part of the URL you use to access your Splunk Cloud instance. For example, if your Splunk Cloud instance URL is https://yourstackname.splunkcloud.com, then your stack name is yourstackname.
export SPLUNK_STACK_NAME="<your_stack_name>"
Additional Notes for Setup
At this point, you may also need to configure the IP allow list to allow your IP to access the Splunk Cloud instance. Please refer to the Splunk Docs for more information.
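IP allow lists are themselves an ACS feature, so you can manage them with the same API. The following is a hedged sketch based on the ACS IP allow list documentation; the feature name (search-api) and the subnets payload are assumptions here, so confirm both against the docs for your stack before running it:
# Assumed endpoint and payload per the ACS IP allow list docs; verify before use.
# Adds a /32 subnet to the search-api allow list (replace with your own IP/CIDR and feature).
curl -X POST "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/access/search-api/ipallowlists" \
  --header "Authorization: Bearer $SPLUNK_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data-raw '{"subnets": ["203.0.113.10/32"]}' | jq '.'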
Using the ACS API
Before we start, make sure that the SPLUNK_API_TOKEN and SPLUNK_STACK_NAME environment variables are set. You can do this by running the following commands:
echo $SPLUNK_API_TOKEN
echo $SPLUNK_STACK_NAME
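If you want your scripts to fail fast when either variable is missing, a small guard like this (plain bash, nothing ACS-specific) does the trick:
# Abort with an error message if either variable is unset or empty
: "${SPLUNK_API_TOKEN:?SPLUNK_API_TOKEN is not set}"
: "${SPLUNK_STACK_NAME:?SPLUNK_STACK_NAME is not set}"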
Using jq
Let’s take a quick minute to talk about jq, a command-line JSON processor. The ACS curl commands return JSON, and jq makes that output much easier to read and work with.
Many Linux distributions include jq, and you can run jq --help to check whether it’s installed. If you don’t have it, I highly recommend installing it.
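It’s available from the usual package managers. For example:
# Debian/Ubuntu
sudo apt-get install -y jq
# macOS (Homebrew)
brew install jq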
In our commands, we will use jq to parse the JSON output. You may be able to get away with something like grep or awk, but jq is a much more robust tool for the job.
jq Examples
In this example, there is a flat JSON array. We can use jq to pretty-print the output.
echo '[{"name":"_internal","datatype":"event","maxDataSizeMB":19000000,"searchableDays":180}]' | jq '.'
The output should look something like this:
[
{
"name": "_internal",
"datatype": "event",
"maxDataSizeMB": 19000000,
"searchableDays": 180
}
]
Additionally, we can use jq to extract specific fields from the JSON output. In this example, we are getting the name of the index.
echo '[{"name":"_internal","datatype":"event","maxDataSizeMB":19000000,"searchableDays":180}]' | jq -r '.[].name'
The output should look something like this:
_internal
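jq can also filter. For example, taking the same sample record plus a second made-up one, this prints only the indexes whose maxDataSizeMB is above an arbitrary threshold:
echo '[{"name":"_internal","datatype":"event","maxDataSizeMB":19000000,"searchableDays":180},{"name":"small_idx","datatype":"event","maxDataSizeMB":512,"searchableDays":30}]' \
  | jq -r '.[] | select(.maxDataSizeMB > 1000000) | .name'
In this case, the output is just _internal.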
Example Use Cases for ACS
List Indexes
We will run the following command to list all the indexes in the Splunk Cloud instance. Keep in mind that by default, it will only return a maximum of 30 indexes. If you have more than 30 indexes, you will need to use the offset and count parameters to page through the rest.
curl "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes" --header "Authorization: Bearer $SPLUNK_API_TOKEN" | jq -r '.[].name'
Example output:
_internal
_audit
...
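If your stack has more indexes than fit in one response, you can page through them. This is a rough sketch; it assumes the indexes endpoint accepts offset and count query parameters, so double-check the exact parameter names in the ACS docs for your version:
# Page through indexes 30 at a time; the offset/count parameter names are assumptions, verify them in the ACS docs
offset=0
count=30
while true; do
  names=$(curl -s "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes?offset=$offset&count=$count" \
    --header "Authorization: Bearer $SPLUNK_API_TOKEN" | jq -r '.[].name')
  [ -z "$names" ] && break   # stop when a page comes back empty
  echo "$names"
  offset=$((offset + count))
done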
Creating an Index
We will run the following command to create an index in the Splunk Cloud instance.
curl -X POST "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes" \
--header "Authorization: Bearer $SPLUNK_API_TOKEN" \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "test",
"datatype": "event",
"maxDataSizeMB": 512,
"searchableDays": 10
}' | jq '.'
If the index was created successfully, the output should look something like this:
{
"datatype": "event",
"maxDataSizeMB": 512,
"name": "test",
"searchableDays": 10
}
If the index already exists, the output should look something like this:
{
"code": "409-object-already-exists",
"message": "Index name test already exists. To update an existing index, please use PATCH. Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
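As the error message suggests, updating an existing index is done with PATCH instead of POST. Here is a hedged sketch of what that could look like; the payload fields mirror the ones used above, but double-check the ACS index management docs for the exact set of attributes you can update:
# Update the test index in place; adjust the fields to whatever you actually need to change
curl -X PATCH "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes/test" \
  --header "Authorization: Bearer $SPLUNK_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "maxDataSizeMB": 1024,
  "searchableDays": 30
}' | jq '.'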
Bulk Creating Indexes
Gather the Indexes
First, we need to make a CSV with the indexes we want to create and their respective configurations. In this example, I’m going to pull the index configs from an All-in-One Splunk instance. You can use the | rest /services/data/indexes command to get the indexes and their configurations. Keep in mind that if you have a distributed environment, you will need to update the search to get the indexes from all the indexers. Additionally, in this setup, I’m assuming that the indexes are the same across all the indexers and that they are all event indexes.
| rest /services/data/indexes splunk_server=local
| fields title frozenTimePeriodInSecs maxTotalDataSizeMB
| rename maxTotalDataSizeMB as maxDataSizeMB, title as name
| eval searchableDays=round(frozenTimePeriodInSecs/86400), datatype="event"
| table name datatype maxDataSizeMB searchableDays
| search NOT name IN ("_*", "main", "history", "splunklogger", "summary", "lastchanceindex")
The final search command filters out any indexes you don’t want to create. Here are some example indexes that I don’t want to create:
| search NOT name IN ("_*", "main", "history", "splunklogger", "summary", "lastchanceindex")
You can export the results to a CSV and it should look something like this. I’ll call it indexes_to_create.csv.
name,datatype,maxDataSizeMB,searchableDays
"_audit",event,19000000,400
"_internal",event,19000000,180
...
Note: These are just example indexes. You won’t be able to create these specific indexes in Splunk Cloud since they are reserved. The same goes for other reserved indexes like main, history, splunklogger, summary, and lastchanceindex. Additionally, keep in mind that index names need to start with a lowercase letter or number and can only contain lowercase letters, numbers, underscores, and hyphens.
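If you want to catch invalid names before sending anything to ACS, you can check the CSV against those naming rules. This is a small sketch based on the rules stated above:
# Print any index names from the CSV that violate the naming rules described above
awk -F, 'NR>1 {gsub(/"/, ""); print $1}' indexes_to_create.csv \
  | grep -Ev '^[a-z0-9][a-z0-9_-]*$' || echo "All index names look valid"
Note that the reserved _audit and _internal examples will be flagged here as well, since they start with an underscore.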
Converting CSV to JSON
Now we need to convert the CSV to a JSON file. We can use awk and jq to do this. Run the following command to convert the CSV to a JSON file. I’ll call the JSON file indexes_to_create.json. Again, I called the CSV file indexes_to_create.csv.
awk -F, 'NR>1 {gsub(/"/, ""); print "{\"name\":\""$1"\",\"datatype\":\""$2"\",\"maxDataSizeMB\":"$3",\"searchableDays\":"$4"}"}' indexes_to_create.csv | jq -s '.' > indexes_to_create.json
The awk command strips the quotes from the exported CSV and converts each row into a JSON object, one per line. The jq -s '.' command then slurps those objects into a single JSON array.
Open the indexes_to_create.json file and it should look something like this:
[
{
"name": "_audit",
"datatype": "event",
"maxDataSizeMB": 19000000,
"searchableDays": 400
},
{
"name": "_internal",
"datatype": "event",
"maxDataSizeMB": 19000000,
"searchableDays": 180
},
...
]
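Before moving on, a quick sanity check confirms the file is valid JSON and shows how many indexes it contains:
# Counts the entries in the array; jq will error out if the JSON is malformed
jq 'length' indexes_to_create.json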
Testing Bulk Create Indexes
Now we can use a combination of jq and a shell while loop to create the indexes. But before we create anything, we can run the following command to make sure the loop builds the curl commands correctly. It will simply print out the curl commands that will be run.
# Dry run: only prints the curl commands, nothing is sent to ACS
cat indexes_to_create.json | jq -c '.[]' | while read -r index; do
  echo curl -X POST "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes" \
    --header "Authorization: Bearer $SPLUNK_API_TOKEN" \
    --header 'Content-Type: application/json' \
    --data-raw "$index"
done
Bulk Create Indexes
Once you verify that the curl commands are correct, you can run the following command to create the indexes.
Note: You can make this command a bit more robust. For example, you can add a mechanism that first checks whether each index already exists (see the sketch after the example output below). In this setup, we are relying on the API call to fail if the index already exists. This is not the best way to handle it, but it keeps the example simple.
cat indexes_to_create.json | jq -c '.[]' | while read -r index; do
  echo "Creating index: $(echo "$index" | jq -r '.name')"
  curl -X POST "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes" \
    --header "Authorization: Bearer $SPLUNK_API_TOKEN" \
    --header 'Content-Type: application/json' \
    --data-raw "$index"
  sleep 30
done
The output should look something like this:
Creating index: _audit
{"datatype":"event","maxDataSizeMB":19000000,"name":"_audit","searchableDays":400}
Creating index: _internal
{"datatype":"event","maxDataSizeMB":19000000,"name":"_internal","searchableDays":180}
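Here is the kind of pre-check mentioned in the note above, as a hedged sketch. It assumes that a GET against adminconfig/v2/indexes/<name> returns HTTP 200 when the index exists and 404 when it does not, and only sends the POST when the index is missing:
cat indexes_to_create.json | jq -c '.[]' | while read -r index; do
  name=$(echo "$index" | jq -r '.name')
  # Assumed behavior: 200 if the index exists, 404 if it does not
  status=$(curl -s -o /dev/null -w "%{http_code}" \
    "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes/$name" \
    --header "Authorization: Bearer $SPLUNK_API_TOKEN")
  if [ "$status" = "404" ]; then
    echo "Creating index: $name"
    curl -X POST "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes" \
      --header "Authorization: Bearer $SPLUNK_API_TOKEN" \
      --header 'Content-Type: application/json' \
      --data-raw "$index"
  else
    echo "Skipping $name (HTTP $status)"
  fi
  sleep 30
done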
Deleting an Index
We will run the following command to delete an index in the Splunk Cloud instance.
curl -X DELETE "https://admin.splunk.com/$SPLUNK_STACK_NAME/adminconfig/v2/indexes/test" --header "Authorization: Bearer $SPLUNK_API_TOKEN" | jq '.'
If the index was deleted successfully, the output should look something like this:
""
If the index does not exist, the output should look something like this:
{
"code": "404-index-not-found",
"message": "test index not found. Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Next Steps
I hope this intro gives you some insight into working with ACS. Please be sure to check out the Splunk Documentation prior to using ACS in a production environment. Additionally, check out the Administer Splunk Cloud Platform using the Admin Config Service (ACS) API section of the ACS Splunk Docs for more use cases. What will you make next? Maybe a script to bulk update indexes? Or a script to manage users and roles? I’ll leave that up to you. 🙂