In Part 1 we covered Salt basics and installation. In this part we will focus on how Salt works and introduce the proxy-minion for Juniper devices.
Let's begin by defining the master configuration on the master01 host.
Use an editor of your choice (such as vim or nano) to edit the file /etc/salt/master and add the following two entries:
root@master01:~# cat /etc/salt/master
interface: 0.0.0.0
auto_accept: True
An interface of all zeros means the master will listen for minions on all available and active interfaces. You can also restrict master-to-minion communication to a specific interface by specifying that interface's IP address.
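For example, to bind the master to a single interface instead, you could set (a sketch; the address shown is simply this lab's master IP):
interface: 192.168.122.2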
As explained in Part 1, master-minion communication is secured and the two exchange keys. The entry "auto_accept: True" makes the master accept minion keys automatically as the minions start, which is acceptable here because this is a controlled demo environment. In practice we keep it set to "False" so that minion keys are accepted manually and no unauthorized minion can connect to the master.
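With auto_accept set to False, pending minion keys are listed and accepted manually with the salt-key utility, roughly as follows (a sketch of the standard workflow):
root@master01:~# salt-key -L           # list accepted/unaccepted/rejected keys
root@master01:~# salt-key -a minion01  # accept the key of minion01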
On the minion side we also have two entries, in the /etc/salt/minion file, as shown below:
root@minion01:~# cat /etc/salt/minion
master: 192.168.122.2
id: minion01
Here, 'master' defines the IP address of the master and 'id' is the unique identifier of this minion. Now start the master with debug logging; notice the authentication request from minion01 in the output.
root@master01:~# salt-master -l debug
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Configuration file path: /etc/salt/master
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO ] Setting up the Salt Master
[INFO ] Generating master keys: /etc/salt/pki/master
[INFO ] Preparing the root key for local communication
[PROFILE ] Beginning pwd.getpwall() call in masterapi access_keys function
[PROFILE ] End pwd.getpwall() call in masterapi access_keys function
[DEBUG ] Created pidfile: /var/run/salt-master.pid
[INFO ] Starting up the Salt Master
[DEBUG ] LazyLoaded roots.envs
[DEBUG ] Could not LazyLoad roots.init: 'roots.init' is not available.
[INFO ] salt-master is starting as user 'root'
[INFO ] Current values for max open files soft/hard setting: 1024/1048576
[INFO ] Raising max open files value to 100000
[INFO ] New values for max open files soft/hard values: 100000/1048576
[INFO ] Creating master process manager
[INFO ] Creating master publisher process
[DEBUG ] Started 'salt.transport.zeromq.._publish_daemon' with pid 18527
[INFO ] Creating master event publisher process
[INFO ] Starting the Salt Publisher on tcp://0.0.0.0:4505
[INFO ] Starting the Salt Puller on ipc:///var/run/salt/master/publish_pull.ipc
[DEBUG ] Started 'salt.utils.event.EventPublisher' with pid 18530
[INFO ] Creating master maintenance process
[DEBUG ] Started 'salt.master.Maintenance' with pid 18531
[INFO ] Creating master request server process
[DEBUG ] Started 'ReqServer' with pid 18532
[ERROR ] Unable to load SSDP: asynchronous IO is not available.
[ERROR ] You are using Python 2, please install "trollius" module to enable SSDP discovery.
[DEBUG ] Process Manager starting!
[DEBUG ] Started 'salt.transport.zeromq..zmq_device' with pid 18533
[DEBUG ] Initializing new Schedule
[INFO ] Setting up the master communication server
[INFO ] Authentication request from minion01
[INFO ] Authentication accepted from minion01
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event: tag = salt/auth; data = {u'id': 'minion01', '_stamp': '2018-04-21T09:20:42.794175', u'result': True, u'pub': '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAupxG1B1QBwxNXX4bhiyK\nN/WL5KRoMQFnwuNYGms1C1PcMthzQ/eCPZW91RQYwTuvPhfUr79lpRXz4DltGSei\nR4RBeGE/pk2g8obx9tQlBhChm3dzZk68S0DvCwnhH76ZKfR5XGuTFCwIH2Uh72/p\nmEET7cYuM8bKNx+nWWzeKhs/rYwuxcJAjwuQZZeccgsWXvS69VP30LVZHCqOM5ZA\n8SleJd8yRyZ6PvLOfQtthJasc7FmWoTqkyGNaPaZSWefe9/FNXreiAk+YXoXIZOC\nNRZQMURHG8L1jot7mUlhSxhjXaCOFCbOwaOhcwHtmUcMfbnQ9Sz0/xh1cFxxRMaH\nSQIDAQAB\n-----END PUBLIC KEY-----', u'act': u'accept'}
[DEBUG ] Determining pillar cache
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] LazyLoaded localfs.init_kwargs
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event: tag = minion/refresh/minion01; data = {u'Minion data cache refresh': 'minion01', '_stamp': '2018-04-21T09:20:43.006560'}
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event: tag = minion_start; data = {'_stamp': '2018-04-21T09:20:43.478571', 'pretag': None, 'cmd': '_minion_event', 'tag': 'minion_start', 'data': 'Minion minion01 started at Sat Apr 21 14:50:43 2018', 'id': 'minion01'}
[DEBUG ] Sending event: tag = salt/minion/minion01/start; data = {'_stamp': '2018-04-21T09:20:43.510991', 'pretag': None, 'cmd': '_minion_event', 'tag': 'salt/minion/minion01/start', 'data': 'Minion minion01 started at Sat Apr 21 14:50:43 2018', 'id': 'minion01'}
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG ] Found minion id from generate_minion_id(): master01
[DEBUG ] Grains refresh requested. Refreshing grains.
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Please install 'virt-what' to improve results of the 'virtual' grain.
[DEBUG ] LazyLoaded local_cache.clean_old_jobs
[DEBUG ] LazyLoaded localfs.list_tokens
[DEBUG ] Updating roots fileserver cache
[DEBUG ] This salt-master instance has accepted 1 minion keys.
Similarly, the minion can be started in debug mode with the following command:
root@minion01:~# salt-minion -l debug
Now let's run some execution commands from the master against the minion. Note that when executing a command we need to specify the minion target. We can also use glob patterns: '*' matches all minions, and 'min*' matches all minions whose names start with 'min'. Notice the use of single quotes around the target (they prevent the shell from expanding the pattern).
Let's execute something from the salt-master:
root@master01:~# salt '*' test.ping
minion01:
True
root@master01:~#
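Targeting by glob works the same way; for example, the following command (shown only as an illustration; with a single minion in this lab it addresses the same minion01):
root@master01:~# salt 'min*' test.ping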
Now let's check the grains (the static information about the minion, as explained in Part 1):
root@master01:~# salt 'minion01' grains.items
minion01:
----------
SSDs:
biosreleasedate:
01/01/2011
biosversion:
0.5.1
cpu_flags:
- fpu
- de
- pse
- tsc
- msr
- pae
- mce
- cx8
- apic
- sep
- mtrr
- pge
- mca
- cmov
- pse36
- clflush
- mmx
- fxsr
- sse
- sse2
- syscall
- nx
- lm
- rep_good
- nopl
- pni
- cx16
- hypervisor
- lahf_lm
- kaiser
cpu_model:
QEMU Virtual CPU version 1.5.3
cpuarch:
x86_64
disks:
- sda
- sr0
- loop0
- loop1
- loop2
- loop3
- loop4
- loop5
- loop6
- loop7
dns:
----------
domain:
ip4_nameservers:
- 192.168.122.1
- 10.233.6.81
ip6_nameservers:
nameservers:
- 192.168.122.1
- 10.233.6.81
options:
search:
sortlist:
domain:
fc_wwn:
fqdn:
minion01
fqdn_ip4:
fqdn_ip6:
fqdns:
gid:
0
gpus:
|_
----------
model:
GD 5446
vendor:
unknown
groupname:
root
host:
minion01
hwaddr_interfaces:
----------
ens3:
52:54:00:00:08:01
ens4:
52:54:00:00:08:03
lo:
00:00:00:00:00:00
id:
minion01
init:
systemd
ip4_gw:
40.1.1.2
ip4_interfaces:
----------
ens3:
- 192.168.122.3
ens4:
- 40.1.1.17
lo:
- 127.0.0.1
ip6_gw:
False
ip6_interfaces:
----------
ens3:
- fe80::5054:ff:fe00:801
ens4:
- fe80::5054:ff:fe00:803
lo:
- ::1
ip_gw:
True
ip_interfaces:
----------
ens3:
- 192.168.122.3
- fe80::5054:ff:fe00:801
ens4:
- 40.1.1.17
- fe80::5054:ff:fe00:803
lo:
- 127.0.0.1
- ::1
ipv4:
- 40.1.1.17
- 127.0.0.1
- 192.168.122.3
ipv6:
- ::1
- fe80::5054:ff:fe00:801
- fe80::5054:ff:fe00:803
iscsi_iqn:
- iqn.1993-08.org.debian:01:2bee19278ac0
kernel:
Linux
kernelrelease:
4.4.0-112-generic
kernelversion:
#135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018
locale_info:
----------
defaultencoding:
UTF-8
defaultlanguage:
en_US
detectedencoding:
UTF-8
localhost:
minion01
lsb_distrib_codename:
xenial
lsb_distrib_description:
Ubuntu 16.04.3 LTS
lsb_distrib_id:
Ubuntu
lsb_distrib_release:
16.04
machine_id:
fb07e936a29d43748b5f9090ec7e9cd3
manufacturer:
Red Hat
master:
192.168.122.2
mdadm:
mem_total:
2000
nodename:
minion01
num_cpus:
2
num_gpus:
1
os:
Ubuntu
os_family:
Debian
osarch:
amd64
oscodename:
xenial
osfinger:
Ubuntu-16.04
osfullname:
Ubuntu
osmajorrelease:
16
osrelease:
16.04
osrelease_info:
- 16
- 4
path:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
pid:
17372
productname:
KVM
ps:
ps -efHww
pythonexecutable:
/usr/bin/python
pythonpath:
- /usr/local/bin
- /usr/lib/python2.7
- /usr/lib/python2.7/plat-x86_64-linux-gnu
- /usr/lib/python2.7/lib-tk
- /usr/lib/python2.7/lib-old
- /usr/lib/python2.7/lib-dynload
- /usr/local/lib/python2.7/dist-packages
- /usr/lib/python2.7/dist-packages
pythonversion:
- 2
- 7
- 12
- final
- 0
saltpath:
/usr/local/lib/python2.7/dist-packages/salt
saltversion:
2017.7.0-693-ga5f96e6
saltversioninfo:
- 2017
- 7
- 0
- 0
serialnumber:
server_id:
1310197239
shell:
/bin/bash
swap_total:
0
systemd:
----------
features:
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN
version:
229
uid:
0
username:
root
uuid:
fb07e936-a29d-4374-8b5f-9090ec7e9cd3
virtual:
kvm
zfs_support:
False
zmqversion:
4.1.6
root@master01:~#
As you can see, a large amount of information has been collected. Let's run a command remotely:
root@master01:~# salt 'minion01' cmd.run 'lsb_release -a'
minion01:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
root@master01:~#
Salt also maintains a file server to distribute files between the master and the minions. For security reasons, minions cannot access every file on the master; instead, we define specific directories in the master configuration that the minions are allowed to access. Files can be copied from master to minion, or vice versa, only within these directories.
The Salt master config file now looks like this:
root@master01:~# cat /etc/salt/master
interface: 0.0.0.0
auto_accept: True
file_roots:
  base:
    - /opt/test_folder
root@master01:~#
We have now defined file_roots in the master config file, which means we can transfer the contents of the folder /opt/test_folder/ from master to minion or vice versa. Let's see how it is done:
root@master01:~# salt 'minion01' cp.get_file 'salt://salt-testfile.txt' '/opt/test_folder/'
minion01:
/opt/test_folder/salt-testfile.txt
root@master01:~#
Let's check on the minion:
root@minion01:~# ll /opt/test_folder/
total 12
drwxr-xr-x 2 root root 4096 Apr 21 15:47 ./
drwxr-xr-x 3 root root 4096 Apr 21 15:37 ../
-rw-r--r-- 1 root root 47 Apr 21 15:47 salt-testfile.txt
root@minion01:~#
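For the reverse direction (minion to master) the cp.push function can be used. As a hedged sketch: it requires 'file_recv: True' in the master configuration, and the pushed file ends up under the master's cache directory (by default /var/cache/salt/master/minions/<minion-id>/files/):
root@master01:~# salt 'minion01' cp.push /opt/test_folder/salt-testfile.txt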
Working with Junos proxy:
The proxy-minion is an important feature that enables controlling devices which cannot run a standard salt-minion. As mentioned in Part 1, the same minion host will act as the proxy-minion for the Junos devices, sitting between the master and the network devices.
The Junos proxy provides the necessary plumbing for device discovery, control, status checks, remote execution, etc., on Juniper routers and switches.
Please note that every Junos device needs its own proxy process. Multiple proxy-minion processes can run on the same minion host, as in our example here.
Before we begin, since we now need to talk to Juniper devices, we need to install three more Python libraries on the master and the minions (installation and verification are shown after the list). These libraries are:
1) junos-eznc: The Juniper PyEz library.
2) jxmlease: a Python module for converting XML to intelligent Python data structures, and converting Python data structures to XML.
3) yamlordereddictloader: a module providing a loader and a dumper for PyYAML that preserve item order when loading a file.
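All three can be installed with pip, for example as below (the versions you get may differ from the ones verified next):
root@master01:~# pip install junos-eznc jxmlease yamlordereddictloader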
root@master01:~# pip list | grep eznc
junos-eznc (2.1.7)
root@master01:~#
root@master01:~# pip list | grep ease
jxmlease (1.0.1)
root@master01:~#
root@master01:~# pip list | grep yaml
yamlordereddictloader (0.4.0)
root@master01:~#
We will use the Juniper virtual QFX (vQFX) as our network devices. This example works in exactly the same way across all Junos-based devices, from the smallest EX2200-C to the biggest PTX10K.
The following is the topology of the virtual network: a small data center with a typical spine-and-leaf architecture.
Please ensure that NETCONF (over SSH) is enabled on the Juniper devices. Below is an example from one spine and one leaf, along with the Junos version:
lab@spine01> show configuration system services
ssh {
root-login allow;
}
netconf {
ssh;
}
{master:0}
lab@spine01> show version brief
fpc0:
--------------------------------------------------------------------------
Hostname: spine01
Model: vqfx-10000
Junos: 17.4R1.16 limited
JUNOS Base OS boot [17.4R1.16]
JUNOS Base OS Software Suite [17.4R1.16]
JUNOS Crypto Software Suite [17.4R1.16]
JUNOS Online Documentation [17.4R1.16]
JUNOS Kernel Software Suite [17.4R1.16]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
JUNOS Routing Software Suite [17.4R1.16]
JUNOS jsd [i386-17.4R1.16-jet-1]
JUNOS SDN Software Suite [17.4R1.16]
JUNOS Enterprise Software Suite [17.4R1.16]
JUNOS Web Management [17.4R1.16]
JUNOS py-base-i386 [17.4R1.16]
JUNOS py-extensions-i386 [17.4R1.16]
lab@leaf02> show configuration system services
ssh {
root-login allow;
}
netconf {
ssh;
}
{master:0}
lab@leaf02> show version brief
fpc0:
--------------------------------------------------------------------------
Hostname: leaf02
Model: vqfx-10000
Junos: 17.4R1.16 limited
JUNOS Base OS boot [17.4R1.16]
JUNOS Base OS Software Suite [17.4R1.16]
JUNOS Crypto Software Suite [17.4R1.16]
JUNOS Online Documentation [17.4R1.16]
JUNOS Kernel Software Suite [17.4R1.16]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
JUNOS Routing Software Suite [17.4R1.16]
JUNOS jsd [i386-17.4R1.16-jet-1]
JUNOS SDN Software Suite [17.4R1.16]
JUNOS Enterprise Software Suite [17.4R1.16]
JUNOS Web Management [17.4R1.16]
JUNOS py-base-i386 [17.4R1.16]
JUNOS py-extensions-i386 [17.4R1.16]
{master:0}
lab@leaf02>
For the master to run commands on the Junos devices, we need to define the following files in the /srv/pillar/ folder:
1) Pillar file for each Junos device
2) Top file for all the pillar files
Pillars are user-defined variables that are distributed to the minions. Pillars are useful for:
1) Highly sensitive data
2) Minion Configuration
3) Variables
4) Arbitrary data
Note: The default location for pillar files is /srv/pillar; however, this can be changed in the master configuration file via the 'pillar_roots' parameter, as sketched below.
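For reference, the corresponding stanza in /etc/salt/master would look like this (shown here with the default path):
pillar_roots:
  base:
    - /srv/pillar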
The top file maps which SLS files get loaded onto which minions. We will see this in more detail in the example below.
The master config does not change; however, since the minion will now act as a proxy-minion, we need to define a proxy configuration file on the minion system. This file is called 'proxy' and lives in the /etc/salt/ folder.
root@minion01:~# ll /etc/salt/
total 24
drwxr-xr-x 4 root root 4096 Apr 21 18:28 ./
drwxr-xr-x 94 root root 4096 Apr 21 18:21 ../
-rw-r--r-- 1 root root 35 Apr 21 14:00 minion
drwxr-xr-x 2 root root 4096 Apr 21 14:49 minion.d/
drwxr-xr-x 3 root root 4096 Apr 21 14:49 pki/
-rw-r--r-- 1 root root 22 Apr 21 18:28 proxy
root@minion01:~#
root@minion01:~# cat /etc/salt/proxy
master: 192.168.122.2
For now, that is all that needs to be done on the minion system.
On the master system, let's look at the files present in the /srv/pillar folder:
root@master01:~# ll /srv/pillar/
total 32
drwxr-xr-x 2 root root 4096 Apr 21 18:24 ./
drwxr-xr-x 3 root root 4096 Apr 21 18:17 ../
-rw-r--r-- 1 root root 76 Apr 21 18:24 leaf01.sls
-rw-r--r-- 1 root root 76 Apr 21 18:24 leaf02.sls
-rw-r--r-- 1 root root 76 Apr 21 18:24 leaf03.sls
-rw-r--r-- 1 root root 77 Apr 21 18:20 spine01.sls
-rw-r--r-- 1 root root 77 Apr 21 18:23 spine02.sls
-rw-r--r-- 1 root root 140 Apr 21 18:19 top.sls
root@master01:~#
Here is the content of one of the pillar files. Note that in the host field we could also provide the IP address of the Junos device:
root@master01:~# cat /srv/pillar/leaf01.sls
proxy:
  proxytype: junos
  host: leaf01
  username: lab
  password: q1w2e3
The contents of the top file:
root@master01:~# cat /srv/pillar/top.sls
base:
  'spine01':
    - spine01
  'spine02':
    - spine02
  'leaf01':
    - leaf01
  'leaf02':
    - leaf02
  'leaf03':
    - leaf03
root@master01:~#
The above top file can be read as: in the 'base' environment, the minion 'spine01' gets the pillar data stored in the spine01 file. Note that the .sls extension does not need to be specified here.
Once again, it is worth noting that all of this configuration is done on the master system. Let's start the master, minion, and proxy-minion processes. The '-d' flag starts each process in daemon mode.
root@master01:~# salt-master -d
On the minion we do the following. Note that a separate proxy-minion process has to be started for each device we want to manage. Also note that each proxy-minion process consumes roughly 50 MB of RAM, so make sure enough memory is available on the minion.
root@minion01:~# salt-proxy --proxyid=spine01 -d
root@minion01:~# salt-proxy --proxyid=spine02 -d
root@minion01:~# salt-proxy --proxyid=leaf01 -d
root@minion01:~# salt-proxy --proxyid=leaf02 -d
root@minion01:~# salt-proxy --proxyid=leaf03 -d
root@minion01:~# ps aux | grep salt
root 18053 5.5 4.3 1562028 89256 ? Sl 18:40 0:03 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=spine01 -d
root 18147 4.7 4.0 1562288 83024 ? Sl 18:40 0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=spine02 -d
root 18399 6.2 4.0 1562024 82924 ? Sl 18:40 0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf01 -d
root 18479 7.0 4.0 1562024 82692 ? Sl 18:40 0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf02 -d
root 18572 8.1 4.0 1562028 82812 ? Sl 18:40 0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf03 -d
root 18921 5.0 2.5 832988 52716 ? Sl 18:41 0:01 /usr/bin/python /usr/local/bin/salt-minion -d
root 18922 0.0 1.6 291388 34704 ? S 18:41 0:00 /usr/bin/python /usr/local/bin/salt-minion -d
root 18995 0.0 0.0 12944 972 pts/0 S+ 18:41 0:00 grep --color=auto salt
root@minion01:~#
As mentioned earlier, master-minion communication is secured and keys are exchanged. We can check which keys the master has accepted:
root@master01:~# salt-key -L
Accepted Keys:
leaf01
leaf02
leaf03
minion01
spine01
spine02
Denied Keys:
Unaccepted Keys:
Rejected Keys:
Now that we have started the master and the proxy-minions, we can check the pillars that were loaded and the grains for the Junos devices. First, the pillars:
root@master01:~# salt '*' pillar.items
spine02:
----------
proxy:
----------
host:
spine02
password:
q1w2e3
proxytype:
junos
username:
lab
leaf03:
----------
proxy:
----------
host:
leaf03
password:
q1w2e3
proxytype:
junos
username:
lab
spine01:
----------
proxy:
----------
host:
spine01
password:
q1w2e3
proxytype:
junos
username:
lab
leaf01:
----------
proxy:
----------
host:
leaf01
password:
q1w2e3
proxytype:
junos
username:
lab
leaf02:
----------
proxy:
----------
host:
leaf02
password:
q1w2e3
proxytype:
junos
username:
lab
root@master01:~#
root@master01:~# salt '*' test.ping
leaf03:
True
leaf01:
True
spine01:
True
spine02:
True
leaf02:
True
root@master01:~#
Now let's run some Junos-specific commands:
root@master01:~# salt 'spine01' 'junos.facts'
spine01:
----------
facts:
----------
2RE:
False
HOME:
/var/home/lab
RE0:
----------
last_reboot_reason:
Router rebooted after a normal shutdown.
mastership_state:
master
model:
QFX Routing Engine
status:
Absent
up_time:
22 hours, 25 minutes, 50 seconds
RE1:
None
RE_hw_mi:
False
current_re:
- master
- node
- fwdd
- member
- pfem
- re0
- fpc0
- localre
domain:
None
fqdn:
spine01
hostname:
spine01
hostname_info:
----------
fpc0:
spine01
ifd_style:
CLASSIC
junos_info:
----------
fpc0:
----------
object:
----------
build:
16
major:
- 17
- 4
minor:
1
type:
R
text:
17.4R1.16
master:
RE0
model:
VQFX-10000
model_info:
----------
fpc0:
VQFX-10000
personality:
None
re_info:
----------
default:
----------
0:
----------
last_reboot_reason:
Router rebooted after a normal shutdown.
mastership_state:
master
model:
QFX Routing Engine
status:
Absent
default:
----------
last_reboot_reason:
Router rebooted after a normal shutdown.
mastership_state:
master
model:
QFX Routing Engine
status:
Absent
re_master:
----------
default:
0
serialnumber:
62861517157
srx_cluster:
None
srx_cluster_id:
None
srx_cluster_redundancy_group:
None
switch_style:
VLAN_L2NG
vc_capable:
True
vc_fabric:
False
vc_master:
0
vc_mode:
Enabled
version:
17.4R1.16
version_RE0:
None
version_RE1:
None
version_info:
----------
build:
16
major:
- 17
- 4
minor:
1
type:
R
virtual:
None
out:
True
root@master01:~#
Again, as you can see, a large amount of information is collected.
root@master01:~# salt 'leaf02*' 'junos.cli' 'show version brief'
leaf02:
----------
message:
fpc0:
--------------------------------------------------------------------------
Hostname: leaf02
Model: vqfx-10000
Junos: 17.4R1.16 limited
JUNOS Base OS boot [17.4R1.16]
JUNOS Base OS Software Suite [17.4R1.16]
JUNOS Crypto Software Suite [17.4R1.16]
JUNOS Online Documentation [17.4R1.16]
JUNOS Kernel Software Suite [17.4R1.16]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
JUNOS Routing Software Suite [17.4R1.16]
JUNOS jsd [i386-17.4R1.16-jet-1]
JUNOS SDN Software Suite [17.4R1.16]
JUNOS Enterprise Software Suite [17.4R1.16]
JUNOS Web Management [17.4R1.16]
JUNOS py-base-i386 [17.4R1.16]
JUNOS py-extensions-i386 [17.4R1.16]
out:
True
root@master01:~#
root@master01:~# salt 'spine*' 'junos.cli' 'show interface terse xe*'
spine01:
----------
message:
Interface Admin Link Proto Local Remote
xe-0/0/0 up up
xe-0/0/0.0 up up inet 1.0.0.2/30
xe-0/0/1 up up
xe-0/0/1.0 up up inet 2.0.0.2/30
xe-0/0/2 up up
xe-0/0/2.0 up up inet 3.0.0.2/30
xe-0/0/3 up up
xe-0/0/3.0 up up eth-switch
xe-0/0/4 up up
xe-0/0/4.16386 up up
xe-0/0/5 up up
xe-0/0/5.16386 up up
xe-0/0/6 up up
xe-0/0/6.16386 up up
xe-0/0/7 up up
xe-0/0/7.16386 up up
xe-0/0/8 up up
xe-0/0/8.16386 up up
xe-0/0/9 up up
xe-0/0/9.16386 up up
xe-0/0/10 up up
xe-0/0/10.16386 up up
xe-0/0/11 up up
xe-0/0/11.16386 up up
out:
True
spine02:
----------
message:
Interface Admin Link Proto Local Remote
xe-0/0/0 up up
xe-0/0/0.0 up up inet 1.0.0.6/30
xe-0/0/1 up up
xe-0/0/1.0 up up inet 2.0.0.6/30
xe-0/0/2 up up
xe-0/0/2.0 up up inet 3.0.0.6/30
xe-0/0/3 up up
xe-0/0/3.0 up up eth-switch
xe-0/0/4 up up
xe-0/0/4.16386 up up
xe-0/0/5 up up
xe-0/0/5.16386 up up
xe-0/0/6 up up
xe-0/0/6.16386 up up
xe-0/0/7 up up
xe-0/0/7.16386 up up
xe-0/0/8 up up
xe-0/0/8.16386 up up
xe-0/0/9 up up
xe-0/0/9.16386 up up
xe-0/0/10 up up
xe-0/0/10.16386 up up
xe-0/0/11 up up
xe-0/0/11.16386 up up
out:
True
root@master01:~#
This is what happens on the vQFX. Note that Salt is actually issuing RPC calls to the switch over NETCONF:
Apr 21 19:51:44 spine01 mgd[5019]: UI_CMDLINE_READ_LINE: User 'lab', command 'load-configuration rpc rpc commit-configuration check commit-configuration rpc rpc commit-configuration rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc command show interface terse xe* '
Apr 21 19:51:44 spine01 mgd[5019]: UI_NETCONF_CMD: User 'lab' used NETCONF client to run command 'get-interface-information level-extra=terse interface-name=xe*'
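Such RPCs can also be invoked directly through the junos.rpc execution function; the call below is a hedged sketch (the exact keyword-argument names may vary between Salt releases):
root@master01:~# salt 'spine01' junos.rpc 'get-interface-information' terse=True interface_name='xe-0/0/0'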
Now let's change some configuration on the switch: we'll change the hostname of 'spine01' to 'spine0001'.
root@master01:~# salt 'spine01' 'junos.set_hostname' 'hostname=spine0001' 'commit_change=True'
spine01:
----------
message:
Successfully changed hostname.
out:
True
root@master01:~#
On spine01 itself, the prompt and the messages log show the change taking effect:
{master:0}[edit]
lab@spine01#
*** messages ***
Apr 21 19:56:26 spine01 mgd[5019]: UI_COMMIT: User 'lab' requested 'commit' operation (comment: none)
Apr 21 19:56:26 spine01 mgd[5019]: UI_COMMIT_NO_MASTER_PASSWORD: No 'system master-password' set
Apr 21 19:56:27 spine01 mgd[5019]: UI_CHILD_EXITED: Child exited: PID 9609, status 7, command '/usr/sbin/mustd'
Apr 21 19:56:27 spine01 rpd[9633]: mpls_label_alloc_mode_new TRUE
Apr 21 19:56:27 spine01 l2cpd[9635]: ppmlite_var_init: iri instance = 36735
Apr 21 19:56:28 spine01 mgd[5019]: UI_COMMIT: User 'lab' requested 'commit' operation (comment: none)
Apr 21 19:56:28 spine01 mgd[5019]: UI_COMMIT_NO_MASTER_PASSWORD: No 'system master-password' set
Apr 21 19:56:28 spine01 mgd[5019]: UI_CHILD_EXITED: Child exited: PID 9642, status 7, command '/usr/sbin/mustd'
Apr 21 19:56:29 spine01 rpd[9666]: mpls_label_alloc_mode_new TRUE
Apr 21 19:56:29 spine01 l2cpd[9668]: ppmlite_var_init: iri instance = 36735
Apr 21 19:56:30 spine0001 mgd[5019]: UI_COMMIT_COMPLETED: commit complete
{master:0}[edit]
lab@spine0001#
And the hostname is changed.
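Going one step further (a sketch, not part of this walkthrough): configuration files placed under the master's file_roots (here /opt/test_folder) can be loaded and committed on a device with junos.install_config; the file name below is hypothetical:
root@master01:~# salt 'leaf01' junos.install_config 'salt://my-config.set'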
In the next part we will explore some of the event-driven capabilities of Salt with Juniper devices.
***End of Part 2***