Note: The latest updates generated by the GitHub Action can be viewed at: https://bhavink.github.io/databricksIPranges/
Databricks IP ranges for firewall allowlisting — all supported clouds: AWS, Azure, and GCP.
A Python utility that retrieves, processes, and organizes the official Databricks IP ranges and produces per-cloud, PA-compatible TXT files.
Source: Databricks IP ranges JSON (live). Official docs: AWS | Azure | GCP.
- Automatically fetches the latest Databricks IP ranges JSON
- Processes and organizes IP ranges by cloud (AWS, Azure, GCP) and type (inbound / outbound)
- Creates individual text files per cloud and type (e.g. `aws.txt`, `azure-outbound.txt`, `gcp.txt`)
- Format compatible with Palo Alto Networks (PA) devices (one CIDR per line)
- Maintains a history of JSON files
- Generates a user-friendly web interface to browse the data
- Downloads the latest JSON from Databricks' official endpoint
- Normalizes and filters by cloud, region, and type (inbound/outbound)
- Creates separate text files per cloud/type in the `docs/output/` directory
- Maintains a history of JSON snapshots in `docs/json-history/`
- Generates an index page and directory listing for easy browsing and automation
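The per-cloud/type file generation above can be sketched roughly as follows. This is a minimal illustration, not the script's actual code; `write_cloud_files` is an illustrative name, and the file-naming pattern mirrors the published output files (e.g. `aws-outbound.txt`):

```python
from collections import defaultdict
from pathlib import Path

def write_cloud_files(prefixes, out_dir="docs/output"):
    """Group CIDRs by (platform, type) and write one TXT file per pair.

    `prefixes` is the prefixes[] array from the Databricks IP-ranges JSON.
    """
    groups = defaultdict(list)
    for entry in prefixes:
        key = (entry["platform"], entry["type"])
        groups[key].extend(entry.get("ipv4Prefixes", []))
        groups[key].extend(entry.get("ipv6Prefixes", []))

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for (cloud, kind), cidrs in groups.items():
        # One CIDR per line -- the PA-compatible format.
        (out / f"{cloud}-{kind}.txt").write_text("\n".join(sorted(set(cidrs))) + "\n")
```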
Official JSON endpoint: https://www.databricks.com/networking/v1/ip-ranges.json
Root keys: timestampSeconds, schemaVersion, prefixes[]. Each entry in prefixes:
| Field | Values |
|---|---|
| `platform` | `aws` · `azure` · `gcp` |
| `region` | e.g. `us-east-1`, `eastus`, `europe-west1` |
| `service` | `Databricks` |
| `type` | `inbound` · `outbound` |
| `ipv4Prefixes` | array of CIDR strings |
| `ipv6Prefixes` | array of CIDR strings |
The script normalizes this to a flat, one-row-per-CIDR structure. `--format csv` produces:
cloud,region,type,cidr,ipVersion,service
aws,us-east-1,outbound,52.5.180.253/32,ipv4,Databricks
`--format json` outputs an array of the same flat objects. `--format simple` (the default) outputs one CIDR per line.
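Given the schema above, the flattening step can be sketched roughly like this (`flatten` is an illustrative name; the script's actual function may differ):

```python
def flatten(data):
    """Expand the nested prefixes[] array into one dict per CIDR."""
    rows = []
    for p in data.get("prefixes", []):
        # Each entry may carry both IPv4 and IPv6 CIDR lists.
        for version, field in (("ipv4", "ipv4Prefixes"), ("ipv6", "ipv6Prefixes")):
            for cidr in p.get(field, []):
                rows.append({
                    "cloud": p["platform"],
                    "region": p["region"],
                    "type": p["type"],
                    "cidr": cidr,
                    "ipVersion": version,
                    "service": p["service"],
                })
    return rows
```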
The repository is updated weekly via GitHub Actions (Databricks refreshes IP ranges every two weeks). Pre-built TXT files are available in output/ so you can download or script against them without running the extractor yourself.
Default output is simple (one CIDR per line). Use --format csv or --format json for other formats.
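All three formats can be rendered from the same flat rows. A minimal sketch, with the column order taken from the CSV header shown earlier (`render` is an illustrative name, not necessarily the script's):

```python
import csv
import io
import json

def render(rows, fmt="simple"):
    """Render flat one-row-per-CIDR dicts as simple, json, or csv text."""
    if fmt == "simple":
        # One CIDR per line -- the default, PA-compatible output.
        return "\n".join(r["cidr"] for r in rows)
    if fmt == "json":
        return json.dumps(rows, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        fields = ["cloud", "region", "type", "cidr", "ipVersion", "service"]
        writer = csv.DictWriter(buf, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unknown format: {fmt}")
```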
# Per cloud (AWS, Azure, or GCP) — one CIDR per line
python extract-databricks-ips.py --cloud aws
python extract-databricks-ips.py --cloud azure
python extract-databricks-ips.py --cloud gcp
# Outbound only (egress allowlisting)
python extract-databricks-ips.py --cloud aws --type outbound
# Specific regions, save to file
python extract-databricks-ips.py --cloud aws --region us-east-1,eu-west-1 --output aws-ips.txt

- Fork the repository – Recommended so you control when and how you consume updates.
- Regular updates – Databricks updates the JSON periodically; the weekly Action keeps the site current.
- Verification – Always verify IP ranges against your requirements before implementation.
- Regions and clouds – Availability may vary by cloud and region; use `--list-regions` to discover available regions. `--list-services` returns only `Databricks`, the sole service value in the data.
The script produces output in a format compatible with Palo Alto Networks (PA) devices. Each cloud/type combination is available as a separate TXT file (e.g. aws-outbound.txt, azure.txt) for easy import into firewall rules or automation.
For production-grade guidance on automating firewall rule updates across AWS, Azure, GCP, and Palo Alto Networks — including Lambda/Function App code, Managed Prefix Lists, IP Groups, Hierarchical Firewall Policies, EDL configuration, and Terraform patterns — see:
For full CLI options, run python extract-databricks-ips.py --help.
This repository and its contents are provided "AS IS" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. The maintainers are not responsible for any damages that may occur from the use or inability to use these scripts or associated files.
Contributions are welcome. Feel free to:
- Open issues for discussion
- Submit pull requests
- Suggest improvements or report bugs
This project is open-source and licensed under the MIT License.
Databricks IP ranges across AWS, Azure, and GCP — fetched from the official Databricks JSON, organized by cloud and type (inbound/outbound), and output as TXT files compatible with Palo Alto Networks. Files are published to the output directory and updated weekly via GitHub Actions.