Terraform is traditionally used for managing virtual infrastructure, but some organisations use Terraform end-to-end and want to manage configuration state with the same methods they use for infrastructure. We could run a provisioner from Terraform, but that is not what was asked for here.
Much as you can use Terraform to create an AWS EC2 instance, you can manage the configuration state of Junos. In essence, we treat Junos configuration as declarative resources.
So what is JTAF? It's a framework: an opinionated set of tools and steps that take you from YANG models to a custom Junos Terraform provider. As with all frameworks, there are some dependencies.
To use JTAF, you'll need a machine that can run Go, Python, Git and Terraform. This can be Linux, macOS or Windows. Some easy-to-consume videos are below.
Run the following commands to set up the junos-terraform environment and workflow:
git clone https://github.com/juniper/junos-terraform
git clone https://github.com/juniper/yang
cd junos-terraform
python3 -m venv venv
. venv/bin/activate
pip install -e .
If you do not already have Terraform installed, on macOS run the following:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
For more information, refer to the Terraform website: https://developer.hashicorp.com/terraform/install.
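Before going further, it can help to confirm the toolchain is actually on PATH. A minimal sketch (the check_tool helper is ours, not part of JTAF):

```shell
# Hypothetical helper (not part of JTAF): report whether each required
# tool is available on PATH before starting.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

for tool in go python3 git terraform; do
  check_tool "$tool"
done
```

Anything reported as missing should be installed before continuing.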
Find the Junos version running on the device and locate the corresponding yang and common folders. Run the pyang command below to generate a .json file containing the YANG information for that version. (See the example below for Junos version 18.2.)
pyang --plugindir $(jtaf-pyang-plugindir) -f jtaf -p <path-to-common> <path-to-yang-files> > junos.json
Example:
pyang --plugindir $(jtaf-pyang-plugindir) -f jtaf -p examples/yang/18.2/18.2R3/common examples/yang/18.2/18.2R3/junos-qfx/conf/*.yang > junos.json
NOTE: This repository includes YANG examples for 18.2 under examples/yang/18.2.
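Before generating the provider, it can be worth confirming that the generated file parses cleanly. A hedged sketch (the validate_json helper is illustrative, not part of JTAF; assumes python3 is on PATH):

```shell
# Illustrative helper: confirm a generated schema file parses as JSON
# before passing it to jtaf-provider.
validate_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid: $1"
  else
    echo "invalid: $1"
  fi
}
```

For example: `validate_json junos.json`.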
Now run the following command to generate a resource provider.
jtaf-provider -j <json-file> -x <xml-configuration(s)> -t <device-type>
Example:
jtaf-provider -j junos.json -x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml -t vqfx
NOTE: If using multiple XML configurations (like the example above), ensure that the configurations are for the same device type.
All in one example (-j accepts - for stdin for jtaf-provider):
pyang --plugindir $(jtaf-pyang-plugindir) -f jtaf -p examples/yang/18.2/18.2R3/common examples/yang/18.2/18.2R3/junos-qfx/conf/*.yang | jtaf-provider -j - -x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml -t vqfx
Use the jtaf-yang2go command to generate a resource provider in a single step by supplying all YANG files with the -p option, the device XML configuration with -x, and the device type with -t.
jtaf-yang2go -p <path-to-common> <path-to-yang-files> -x <xml-configuration(s)> -t <device-type>
Example:
jtaf-yang2go -p examples/yang/18.2/18.2R3/common examples/yang/18.2/18.2R3/junos-qfx/conf/*.yang -x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml -t vqfx
NOTE: If using multiple XML configurations (like the example above), ensure that the configurations are for the same device type.
NOTE: The examples in this README use the YANG files shipped in this repository under examples/yang/18.2.
cd into the newly created directory (named terraform-provider-junos- followed by the device type) and run go install.
Example:
cd terraform-provider-junos-vqfx
go install .
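go install places the provider binary in $GOBIN (or $GOPATH/bin by default). A quick illustrative check that it landed there (the helper name is ours):

```shell
# Illustrative helper: check that a named binary exists and is
# executable in a given directory (e.g. "$(go env GOPATH)/bin").
binary_present() {
  if [ -x "$1/$2" ]; then
    echo "found: $2"
  else
    echo "not found: $2 in $1"
  fi
}
```

For example: `binary_present "$(go env GOPATH)/bin" terraform-provider-junos-vqfx`.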
Run the jtaf-xml2tf command to generate .tf test files that deploy the Terraform provider.
NOTE: Output is written to a directory (-d) as providers.tf plus one .tf file per XML input.
Flag Options:
- -j
- Required: trimmed_schema.json file produced by jtaf-provider (stored in the provider folder terraform-provider-junos-<device-type>)
- -x
- Required: XML configuration file(s) to create Terraform files from
- -t
- Required: Junos device type
- -d
- Required: Output directory where providers.tf and per-device Terraform files are written
- -u
- Optional: Device username
- -p
- Optional: Device password
To create multiple Terraform (.tf) files from multiple configuration files, where each .tf file represents one XML file, use the following command (output is written to the specified directory):
jtaf-xml2tf -j <path-to-trimmed-schema> -x <path-to-config-files(s)> -t <device-type> -d <testing-folder-name>
Example:
- trimmed_schema.json - stored in the terraform provider folder created by the jtaf-provider command (usually terraform-provider-junos-<device-type>)
- xml_files - directory containing the XML file(s); ensure all XML files are for the same device type
jtaf-xml2tf -j terraform-provider-junos-vqfx/trimmed_schema.json -x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml -t vqfx -d testbed
- To provide the device username and password, add the optional flags as well:
jtaf-xml2tf -j terraform-provider-junos-vqfx/trimmed_schema.json -x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml -t vqfx -d testbed -u root -p password
The command writes a template HCL .tf file for each input XML file to the specified directory. We can now set up our testing environment and fill in the templates with any remaining device or configuration details.
Now that we have run the jtaf-xml2tf command and have our testing folder set up:
- The command writes files directly under your test folder in the /junos-terraform directory.
Next, create a .terraformrc file in your home directory (~) and add the following contents, replacing the <elements> placeholders with your own information. This ensures Terraform reads the provider plugin you created and installed to go/bin.
.terraformrc example
provider_installation {
  dev_overrides {
    "registry.terraform.io/hashicorp/junos-<device-type>" = "<path-to-go/bin>"
  }
  direct {}
}
Example:
provider_installation {
  dev_overrides {
    "registry.terraform.io/hashicorp/junos-vqfx" = "/Users/patelv/go/bin"
  }
  direct {}
}
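If you prefer to generate the stanza rather than hand-edit it, the following sketch builds it from your environment (assumptions: the provider was installed via go install into the default Go bin directory, and the device type is vqfx):

```shell
# Sketch: build the dev_overrides stanza using your actual Go bin
# directory. Adjust "vqfx" to your device type.
gobin="${GOBIN:-$HOME/go/bin}"
terraformrc=$(printf 'provider_installation {\n  dev_overrides {\n    "registry.terraform.io/hashicorp/junos-vqfx" = "%s"\n  }\n  direct {}\n}\n' "$gobin")
printf '%s\n' "$terraformrc"
```

Redirect the output to ~/.terraformrc once it looks correct.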
You should now have a file structure which looks similar to:
- (if you created one terraform test file)
/junos-terraform/<testing-folder-name>/
/junos-terraform/<testing-folder-name>/providers.tf
/junos-terraform/<testing-folder-name>/<hostname>.tf
/Users/<username>/.terraformrc <-- points to the provider installed in go/bin [see details above]
OR:
- (if you used the -d flag with the jtaf-xml2tf command and created a directory of multiple terraform test files)
/junos-terraform/<testing-folder-name>/ <-- contents of jtaf-xml2tf command
/junos-terraform/<testing-folder-name>/dc1-borderleaf1.tf
/junos-terraform/<testing-folder-name>/dc1-borderleaf2.tf
/junos-terraform/<testing-folder-name>/dc1-leaf1.tf
/junos-terraform/<testing-folder-name>/dc1-leaf2.tf
/junos-terraform/<testing-folder-name>/dc1-leaf3.tf
/junos-terraform/<testing-folder-name>/dc1-spine1.tf
/junos-terraform/<testing-folder-name>/dc1-spine2.tf
/junos-terraform/<testing-folder-name>/dc2-spine1.tf
/junos-terraform/<testing-folder-name>/dc2-spine2.tf
/Users/<username>/.terraformrc <-- points to the provider installed in go/bin [see details above]
In the test file(s), devices being configured are specified using the host field as shown below:
provider "junos-vqfx" {
  host     = "dc1-leaf1"
  port     = 22
  username = ""
  password = ""
  alias    = "dc1_leaf1"
}
You can either specify the exact IP address in the host field, or use a hostname (as in the example above) and map each hostname to an IP address in the system file /etc/hosts.
NOTE: If /etc/hosts is read-only for your user, edit it with elevated privileges (for example, sudo vi /etc/hosts).
Example:
127.0.0.1 localhost
<IP address> dc1-leaf1
<IP address> dc1-leaf2
<IP address> dc1-leaf3
<IP address> dc2-spine1
<IP address> dc2-spine2
<IP address> dc1-spine1
<IP address> dc1-borderleaf2
<IP address> dc1-borderleaf1
<IP address> dc1-firewall1
<IP address> dc1-firewall2
<IP address> dc2-firewall1
<IP address> dc1-spine2
<IP address> dc2-firewall2
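As an alternative to editing the file in an editor, entries can be appended with tee (for the real file, pipe through sudo, e.g. `echo "192.0.2.11 dc1-leaf1" | sudo tee -a /etc/hosts`). A small illustrative helper, with the file path parameterised so it can be tested safely:

```shell
# Illustrative helper: append "<ip> <hostname>" to a hosts file
# (defaults to /etc/hosts, which normally requires sudo).
add_host_entry() {
  ip="$1"; name="$2"; file="${3:-/etc/hosts}"
  echo "$ip $name" | tee -a "$file" > /dev/null
}
```

For example: `add_host_entry 192.0.2.11 dc1-leaf1 /tmp/hosts.test` writes the entry to a scratch file first so you can review it.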
Once the .terraformrc file is set up, and the generated test file(s) reference the provider, the target devices, and the desired configuration in HCL format, we are ready to use the provider.
terraform plan
terraform apply -auto-approve
Create an Ansible role + playbook from a Junos JSON schema and one or more XML configs. The generated playbook runs locally and renders configs (does not connect to devices) by default.
Quick usage:
jtaf-ansible -j <junos.json> -x <config1.xml> [-x <config2.xml> ...] -t <device-type>
What is created (under ansible-provider-junos-<device-type>/):
- roles/<device-type>_role/ (tasks/main.yml, templates/template.j2)
- jtaf-playbook.yml (uses connection: local)
- host_vars/, configs/, trimmed_schema.json
Verify rendering without applying:
cd ansible-provider-junos-<type>
ansible-playbook -i "localhost," jtaf-playbook.yml --check --diff
Generate an Ansible role + playbook in one step from YANG files and XML config(s):
jtaf-yang2ansible -p <path-to-common> <path-to-yang-files> -x <xml-config(s)> -t <device-type>
Example:
jtaf-yang2ansible -p examples/yang/18.2/18.2R3/common examples/yang/18.2/18.2R3/junos-qfx/conf/*.yang -x examples/evpn-vxlan-dc/dc1/*spine*.xml -t vqfx
Notes:
- If supplying multiple XML configs they must be for the same device type.
- Output directory: ansible-provider-junos-<device-type>/ containing roles/<device-type>_role/ (tasks/templates), jtaf-playbook.yml (connection: local), host_vars/, group_vars/, configs/, trimmed_schema.json.
- Run the generated playbook in check/diff mode to verify rendered configs without applying: ansible-playbook -i "localhost," jtaf-playbook.yml --check --diff
Convert one or more Junos XML configs into Ansible host_vars, group_vars, and inventory data.
Important behavior:
- Each run should use one trimmed_schema.json and matching XML configs for the generated role.
- --grouping-hosts-file is required.
- Inventory groups and optional :children sections come from the grouping.hosts file, not from a -t/--type flag.
- Output can be split across directories, so the generated role location and provisioning playbook location can differ.
- Repeated runs are merge-safe when they reuse the same -d/--directory output directory:
  - group_vars/all.yaml stores values shared across all hosts currently tracked in that output directory.
  - group_vars/<group>/all.yaml stores per-group deltas for the groups defined in grouping.hosts.
- Host-specific differences are preserved in each host_vars/<host>.yaml file.
- Existing inventory hosts/groups are merged (not clobbered).
Usage:
jtaf-xml2yaml -j <trimmed_schema.json> -x <config1.xml> [<config2.xml> ...] -d <output-dir> --grouping-hosts-file <grouping_hosts_file>
Example:
jtaf-xml2yaml -j ansible-provider-junos-vqfx/trimmed_schema.json \
-x examples/evpn-vxlan-dc/dc1/dc1-leaf1.xml examples/evpn-vxlan-dc/dc1/dc1-spine1.xml \
-d ansible_files \
--grouping-hosts-file examples/ansible/switches_grouping_hosts
Output:
- Creates host_vars/<hostname>.yaml for every XML file provided (the hostname is the file base name or system/host-name from the XML).
- Writes group_vars/all.yaml for keys shared across all tracked hosts in the output directory.
- Writes group_vars/<group>/all.yaml for each group defined in grouping.hosts.
- Writes/updates the inventory hosts file with [all], [group], and [group:children] sections taken from grouping.hosts.
This output feeds into the Ansible role/playbook created by jtaf-ansible/jtaf-yang2ansible.
This walkthrough is for first-time Ansible users and reflects the recommended split layout:
- JTAF-generated role in one directory.
- Operator playbook/inventory/vars in another directory.
- Install Ansible dependencies on your control node
Note that Ansible must be installed in the virtual environment (venv). Additionally, the following system packages and Python modules are required:
python -m pip install --upgrade pip
sudo dnf install ansible -y
sudo dnf install python3-pip -y
/usr/bin/python3 -m pip install ncclient junos-eznc jxmlease
# Install Juniper collection used to push config.
ansible-galaxy collection install juniper.device

- Generate the first role (QFX EVPN-VXLAN) from YANG + XML
# Generate role + templates from YANG + XML
jtaf-yang2ansible \
-p examples/yang/18.2/18.2R3/common \
examples/yang/18.2/18.2R3/junos-qfx/conf/*.yang \
-x \
examples/evpn-vxlan-dc/dc1/dc1-borderleaf1.xml \
examples/evpn-vxlan-dc/dc1/dc1-borderleaf2.xml \
examples/evpn-vxlan-dc/dc1/dc1-leaf1.xml \
examples/evpn-vxlan-dc/dc1/dc1-leaf2.xml \
examples/evpn-vxlan-dc/dc1/dc1-leaf3.xml \
examples/evpn-vxlan-dc/dc1/dc1-spine1.xml \
examples/evpn-vxlan-dc/dc1/dc1-spine2.xml \
examples/evpn-vxlan-dc/dc2/dc2-spine1.xml \
examples/evpn-vxlan-dc/dc2/dc2-spine2.xml \
-t vqfx-evpn-vxlan

- Generate a second role (SRX firewalls) from YANG + XML
jtaf-yang2ansible \
-p examples/yang/18.2/18.2R3/common \
examples/yang/18.2/18.2R3/junos-es/conf/*.yang \
-x examples/evpn-vxlan-dc/dc1/dc1-*firewall*.xml examples/evpn-vxlan-dc/dc2/dc2-*firewall*.xml \
-t srx-ansible-role

- Create a separate provisioning playbook project
Create a separate directory for your operator playbook:
mkdir -p ansible-evpn-vxlan-deploy
Create ansible-evpn-vxlan-deploy/ansible.cfg:
[defaults]
roles_path = ../ansible-provider-junos-vqfx-evpn-vxlan/roles:../ansible-provider-junos-srx-ansible-role/roles
host_key_checking = False
For first-time Ansible users: roles_path tells Ansible where custom roles live. In this workflow, both generated roles are referenced, while your operator playbook stays in ansible-evpn-vxlan-deploy/.
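Because the roles_path entries are relative paths, a typo only shows up later when a role fails to load. An illustrative sketch (the helper is ours) that checks each entry resolves to a real directory:

```shell
# Illustrative helper: verify each directory in a colon-separated
# roles_path value actually exists.
check_roles_path() {
  echo "$1" | tr ':' '\n' | while IFS= read -r dir; do
    if [ -d "$dir" ]; then
      echo "ok: $dir"
    else
      echo "missing: $dir"
    fi
  done
}
```

For example, from inside ansible-evpn-vxlan-deploy/: `check_roles_path "../ansible-provider-junos-vqfx-evpn-vxlan/roles:../ansible-provider-junos-srx-ansible-role/roles"`.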
- Create grouping.hosts files for the inventory hierarchy
jtaf-xml2yaml now requires a grouping definition. The section names in these files become your generated inventory groups and group_vars/<group>/all.yaml directories.
Create ansible-evpn-vxlan-deploy/qfx.grouping.hosts:
[all]
dc1-borderleaf1
dc1-borderleaf2
dc1-leaf1
dc1-leaf2
dc1-leaf3
dc1-spine1
dc1-spine2
dc2-spine1
dc2-spine2
[borderleaf]
dc1-borderleaf1
dc1-borderleaf2
[leaf]
dc1-leaf1
dc1-leaf2
dc1-leaf3
[spine]
dc1-spine1
dc1-spine2
dc2-spine1
dc2-spine2
Create ansible-evpn-vxlan-deploy/firewall.grouping.hosts:
[all]
dc1-firewall1
dc1-firewall2
dc2-firewall1
dc2-firewall2
[firewall]
dc1-firewall1
dc1-firewall2
dc2-firewall1
dc2-firewall2

- Generate inventory + vars for the first role into the playbook project
Use the same -d directory for every jtaf-xml2yaml run that should share one inventory, group_vars, host_vars, and payload cache.
jtaf-xml2yaml \
-x examples/evpn-vxlan-dc/dc1/*{spine,leaf}*.xml examples/evpn-vxlan-dc/dc2/*spine*.xml \
-j ansible-provider-junos-vqfx-evpn-vxlan/trimmed_schema.json \
-d ansible-evpn-vxlan-deploy \
--hosts-file ansible-evpn-vxlan-deploy/inventory.ini \
--grouping-hosts-file ansible-evpn-vxlan-deploy/qfx.grouping.hosts

- Generate inventory + vars for the second role into the same playbook project
jtaf-xml2yaml \
-x examples/evpn-vxlan-dc/dc1/dc1-*firewall*.xml examples/evpn-vxlan-dc/dc2/dc2-*firewall*.xml \
-j ansible-provider-junos-srx-ansible-role/trimmed_schema.json \
-d ansible-evpn-vxlan-deploy \
--hosts-file ansible-evpn-vxlan-deploy/inventory.ini \
--grouping-hosts-file ansible-evpn-vxlan-deploy/firewall.grouping.hosts
After both runs, your playbook project should contain at least:
- inventory.ini
- group_vars/all.yaml
- group_vars/borderleaf/all.yaml
- group_vars/leaf/all.yaml
- group_vars/spine/all.yaml
- group_vars/firewall/all.yaml
- host_vars/<hostname>.yaml
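To confirm both runs merged into a single var tree, you can list every all.yaml under the output directory. An illustrative helper (name is ours):

```shell
# Illustrative helper: list every all.yaml under the merged var tree.
list_all_yaml() {
  find "$1" -name 'all.yaml' | sort
}
```

For example: `list_all_yaml ansible-evpn-vxlan-deploy` should show group_vars/all.yaml plus one all.yaml per group from both grouping files.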
Update ansible-evpn-vxlan-deploy/inventory.ini with reachable management addresses while keeping the generated group names:
[borderleaf]
dc1-borderleaf1 ansible_host=192.0.2.101 ansible_port=830
dc1-borderleaf2 ansible_host=192.0.2.102 ansible_port=830
[leaf]
dc1-leaf1 ansible_host=192.0.2.11 ansible_port=830
dc1-leaf2 ansible_host=192.0.2.12 ansible_port=830
dc1-leaf3 ansible_host=192.0.2.13 ansible_port=830
[spine]
dc1-spine1 ansible_host=192.0.2.21 ansible_port=830
dc1-spine2 ansible_host=192.0.2.22 ansible_port=830
dc2-spine1 ansible_host=192.0.2.31 ansible_port=830
dc2-spine2 ansible_host=192.0.2.32 ansible_port=830
[firewall]
dc1-firewall1 ansible_host=192.0.2.201 ansible_port=830
dc1-firewall2 ansible_host=192.0.2.202 ansible_port=830
dc2-firewall1 ansible_host=192.0.2.203 ansible_port=830
dc2-firewall2 ansible_host=192.0.2.204 ansible_port=830
Notes on repeated runs:
- Reuse the same -d directory whenever you want one merged inventory and var tree.
- group_vars/all.yaml contains values shared across every tracked host in that output directory.
- group_vars/<group>/all.yaml contains per-group deltas for groups declared in the relevant grouping.hosts file.
- Host-specific differences remain in host_vars/<hostname>.yaml.
- Create a playbook that renders, previews diff, pushes, and verifies
Create ansible-evpn-vxlan-deploy/site.yml:
---
- name: Render XML from generated QFX role
  hosts: borderleaf:leaf:spine
  gather_facts: false
  connection: local
  vars:
    tmp_dir: ../ansible-provider-junos-vqfx-evpn-vxlan/configs
  roles:
    - role: vqfx-evpn-vxlan_role
      delegate_to: localhost

- name: Render XML from generated SRX role
  hosts: firewall
  gather_facts: false
  connection: local
  vars:
    tmp_dir: ../ansible-provider-junos-srx-ansible-role/configs
  roles:
    - role: srx-ansible-role_role
      delegate_to: localhost

- name: Preview and apply rendered XML on QFX devices
  hosts: borderleaf:leaf:spine
  gather_facts: false
  connection: local
  vars:
    netconf_user: "{{ lookup('env', 'NETCONF_USERNAME') }}"
    netconf_pass: "{{ lookup('env', 'NETCONF_PASSWORD') }}"
    tmp_dir: ../ansible-provider-junos-vqfx-evpn-vxlan/configs
  tasks:
    - name: Preview candidate diff without committing
      juniper.device.config:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ ansible_port | default(830) }}"
        user: "{{ netconf_user }}"
        passwd: "{{ netconf_pass }}"
        load: replace
        src: "{{ tmp_dir }}/{{ inventory_hostname }}.xml"
        check: true
        commit: false
        diff: true
      register: preview_result

    - name: Print diff lines from preview
      ansible.builtin.debug:
        var: preview_result.diff_lines

    - name: Load and commit with commit-confirm safeguard
      juniper.device.config:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ ansible_port | default(830) }}"
        user: "{{ netconf_user }}"
        passwd: "{{ netconf_pass }}"
        load: replace
        src: "{{ tmp_dir }}/{{ inventory_hostname }}.xml"
        confirmed: 5
        check_commit_wait: 5
        comment: "Apply EVPN-VXLAN config generated by JTAF"
      register: apply_result

    - name: Verify module-reported apply result
      ansible.builtin.assert:
        that:
          - not (apply_result.failed | default(false))
          - apply_result.msg is defined
        fail_msg: "Config apply failed on {{ inventory_hostname }}"

    - name: Confirm previously confirmed commit
      juniper.device.config:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ ansible_port | default(830) }}"
        user: "{{ netconf_user }}"
        passwd: "{{ netconf_pass }}"
        check: true
        commit: false
        diff: false
      register: confirm_result

    - name: Print apply and confirm summaries
      ansible.builtin.debug:
        msg:
          - "apply={{ apply_result.msg | default('no message') }}"
          - "confirm={{ confirm_result.msg | default('no message') }}"

- name: Preview and apply rendered XML on SRX devices
  hosts: firewall
  gather_facts: false
  connection: local
  vars:
    netconf_user: "{{ lookup('env', 'NETCONF_USERNAME') }}"
    netconf_pass: "{{ lookup('env', 'NETCONF_PASSWORD') }}"
    tmp_dir: ../ansible-provider-junos-srx-ansible-role/configs
  tasks:
    - name: Preview candidate diff without committing (SRX)
      juniper.device.config:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ ansible_port | default(830) }}"
        user: "{{ netconf_user }}"
        passwd: "{{ netconf_pass }}"
        load: replace
        src: "{{ tmp_dir }}/{{ inventory_hostname }}.xml"
        check: true
        commit: false
        diff: true
      register: preview_result_srx

    - name: Load and commit with commit-confirm safeguard (SRX)
      juniper.device.config:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ ansible_port | default(830) }}"
        user: "{{ netconf_user }}"
        passwd: "{{ netconf_pass }}"
        load: replace
        src: "{{ tmp_dir }}/{{ inventory_hostname }}.xml"
        confirmed: 5
        check_commit_wait: 5
        comment: "Apply SRX config generated by JTAF"
      register: apply_result_srx

- Run the playbook
cd ansible-evpn-vxlan-deploy
# NETCONF credentials used by the playbook
export NETCONF_USERNAME='<junos-netconf-user>'
export NETCONF_PASSWORD='<junos-netconf-password>'
ansible-playbook -i inventory.ini site.yml

- What to check in output
- The preview tasks should show diffs for the generated switch groups (borderleaf, leaf, and spine) and for firewall.
- The apply tasks should succeed for both generated roles.
- Inventory and vars should remain merged across repeated jtaf-xml2yaml runs.
At this point you have completed render -> preview diff -> push -> plugin-level verification using both generated roles (QFX and SRX).