Why the DNS Server Implementation Is Important
I'm going to get slightly ahead of myself again and explore a configuration aspect whose importance is not yet clear. This time, it's DNS. The reason WHY it's so important is probably not obvious (it wasn't for me at first).
There are a number of ways to manage the SLES hosts you deploy: SALT, SuSE Manager, Puppet, Chef, Ansible, etc. In the environment I constructed, for various reasons, I selected the "Traditional" approach, registering hosts to the SUMA and using the non-SALT management functions of the platform. That should by no means be taken to suggest that any other management platform cannot do just as well. In conjunction with the "Build" network, however, this choice did create certain challenges.
As readers know, once the OS has been laid down via AutoYaST, the bootstrap script (another future topic) for the "Traditional" environment will "register" the host during the Firstboot process (that is, the first time the host is booted following installation).
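In practice, that registration is just a fetch-and-run of the bootstrap script the SUMA publishes. As a rough sketch of the Firstboot step (the host name is a placeholder, and the script name depends on how you generated your bootstrap script):
# Firstboot sketch - placeholder names
curl -Sks https://<FQDN of your SUMA>/pub/bootstrap/bootstrap.sh | /bin/bash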
If the "Build network” is truly isolated, then your normal DNS services are not available within. This poses some issues.
It is possible to refer to the SUMA by the IP address it has in the "Build network". The trouble with that is that when the host is registered, the mgr-daemon configuration will contain that IP - the tools will never attempt to resolve a name, and once the host no longer has any connectivity to the "Build network", the tools will lose all contact with the SUMA.
If the build process refers to the SUMA by its hostname, then there are two problems: first, name resolution services must be offered in the "Build network"; second, if the DNS request is merely relayed to your organization's usual nameservers, then the reply will not be the SUMA's IP address in the "Build network", but rather its "public" or "management" IP - the host-under-construction will resolve the name to an address it cannot reach, and the tools will again lose all contact with the SUMA.
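To see why this matters, look at where the registration URL ends up on a "Traditional" client: it is stored verbatim in /etc/sysconfig/rhn/up2date and used as-is from then on (the values below are hypothetical):
# /etc/sysconfig/rhn/up2date (excerpt,
# hypothetical values)
# Registered by IP: dead as soon as the
# host leaves the Build network
serverURL=https://192.168.1.10/XMLRPC
# Registered by name: works anywhere the
# name resolves to a reachable address
serverURL=https://<FQDN of your SUMA>/XMLRPC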
You may be able to address this situation via routing. In that case, ignore (in whole or part, as appropriate to your network environment) my previous advice about making sure the SUMA's services are available in the "Build network". Your solution is going to be some variation on the theme I've presented so far.
But let's say you do need to provide DNS services within the "Build network" to address the issues I've described. A possible solution uses DNS Views, so that the reply to a relayed request differs from the reply given to a direct request. Again, if that's the best solution for you, use it.
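For the record, a views-based named.conf might be sketched like this (hypothetical names and addresses; this is NOT the configuration I ended up running):
# /etc/named.conf (views sketch only)
view "build" {
# Hosts in the Build network get the
# Build-network answer
match-clients { 192.168.1.0/24; };
zone "<FQDN of your SUMA>" in {
type master;
file "<FQDN of your SUMA>.build.zone";
};
};
view "external" {
# Everyone else gets the normal answer
match-clients { any; };
zone "<FQDN of your SUMA>" in {
type master;
file "<FQDN of your SUMA>.zone";
};
};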
In my environment, I designed an approach that I find easier to implement and maintain. On the SUMA, I run a DNS server that has no function other than to resolve the hostname of the SUMA to the IP address it has in the "Build network”. All other resolution requests are forwarded to my organization’s main nameservers and resolved normally.
The configuration isn't the usual one for BIND, and it's not immediately obvious unless you know the BIND 9 configuration options intimately. Here's the configuration that worked for me:
# /etc/sysconfig/named
#######################################
# SLES Configuration for BIND/DNS
#######################################
# This file defines how the OS generally
# manages the BIND service
#
# IMPORTANT! It does NOT define the
# configuration of named !!
#
# The daemon itself is configured via
# /etc/named.conf
#######################################
# Change Log (Reverse Chronology)
# Who When______ What__________________
# dxb 2019-04-18 Initial creation
########################################
# Run BIND server in a chroot jail
# (/var/lib/named)?
# Valid Values:
# no = do NOT run named in a Jail
# yes = (DEFAULT) Jail named
# If set to "yes", then each time
# named is (re)started,
# /etc/named.conf is copied to
# /var/lib/named
# Other configuration files
# (/etc/named.conf.include,
# /etc/rndc.key, and all files listed
# in NAMED_CONF_INCLUDE_FILES) are
# also copied to the chroot jail
# The PID file will be
# /var/lib/named/var/run/named/named.pid
NAMED_RUN_CHROOTED="yes"
# Additional arguments for the named
# server command-line
# Specified as a string of valid options
# Default: Empty string
# Examples:
# "-n 2" = use two CPUs (helpful if
# named is unable to determine
# the number of available CPUs)
# NOTE: If NAMED_RUN_CHROOTED="yes",
# then "-t /var/lib/named/var"
# is added
NAMED_ARGS=""
# Additional BIND configuration files
# Specified as a space-separated string
# of either files or directories
# (omit trailing /); if relative path
# is provided, it is relative to
# /etc/named.d/
# Default: Empty string
# Example: "/etc/bind-dhcp.key ldap.dump"
# NOTE: /etc/named.conf, any files
# mentioned as includes in that file,
# and /etc/rndc.key are all always
# copied regardless of this setting
NAMED_CONF_INCLUDE_FILES=""
# Define other programs that are
# executed every time named is
# (re)started
# Specified as a space-separated string
# of files; relative paths are taken
# relative to /usr/share/bind/
# Default: "createNamedConfInclude"
NAMED_INITIALIZE_SCRIPTS="createNamedConfInclude"
# End of /etc/sysconfig/named
##############################
and the companion file:
# /etc/named.conf
# NOTE: This file is COPIED to the
# chroot jail when BIND starts,
# so be careful that you edit
# the correct copy
#######################################
# Change Log (Reverse Chronology)
# Who When______ What__________________
# dxb 2019-04-18 Initial creation
#######################################
# Mostly, defaults are used
# Exceptions are "listen-on",
# "listen-on-v6" and the addition of
# a Zone for the SUMA's FQDN - the
# Zone file should exist as
# /var/lib/named/<FQDN of your SUMA>.zone
#######################################
options {
# Defines BIND's working directory
directory "/var/lib/named";
# Enable DNSSEC & validation?
# Valid Values:
# no = Disable
# yes = Enable
# For dnssec-validation only
# auto = (DEFAULT) let named decide
dnssec-enable no;
dnssec-validation auto;
# Set the keys directory
managed-keys-directory "/var/lib/named/dyn/";
# Write dump and statistics file to
# the log subdirectory
# The pathnames are relative to
# the chroot jail
dump-file "/var/log/named_dump.db";
statistics-file "/var/log/named.stats";
# The forwarders record contains
# a list of servers to which
# queries should be forwarded;
# up to three IPs may be listed
forwarders { 10.0.1.200; 10.0.2.200; };
# Should BIND forward instead of
# attempting to resolve locally?
# Valid Values:
#  first = (DEFAULT when forwarders
#      are defined) query the
#      forwarders first, then attempt
#      local resolution if they fail
#  only = always forward; never
#      attempt local resolution
# forward first;
# Define a list of local IPv4
# network interfaces to listen on;
# optionally the port can be
# specified
# Valid Values: 'any', 'none' or a
# list of addresses
# Default: 'any' (default port is 53)
listen-on port 53 { 192.168.1.10; };
# Define a list of local IPv6
# network interfaces to listen on;
# Valid Values: 'any', 'none' or a
# list of addresses
# Default: 'any'
listen-on-v6 { none; };
# The allow-query record contains a
# list of networks or IP addresses
# from which queries are allowed or
# denied; allow-recursion does the
# same for recursive queries
# Valid Values: 'any', 'none' or a
# list of addresses
# Default: 'any' (queries/recursion
# allowed from anywhere)
allow-query { any; };
allow-recursion { any; };
# Should other name servers be
# notified when a Zone changes?
# Valid Values:
# yes (DEFAULT)
# no
notify no;
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
};
# Logging is best-handled on a
# per-site basis
# The following FOUR zone definitions
# should not need any modification
zone "." in {
type hint;
file "root.hint";
};
# Zone for localhost
zone "localhost" in {
type master;
file "localhost.zone";
};
# Reverse-lookup zone for IPv4
zone "0.0.127.in-addr.arpa" in {
type master;
file "127.0.0.zone";
};
# Reverse-lookup for IPv6
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "127.0.0.zone";
};
# Finally, the Zone for the SUMA
# The Zone name is the SUMA's FQDN, so
# the Zone file should exist as
# /var/lib/named/<FQDN of your SUMA>.zone
zone "<FQDN of your SUMA>" in {
type master;
file "<FQDN of your SUMA>.zone";
};
# Include the meta include file generated
# by createNamedConfInclude. This
# includes all files as configured in
# NAMED_CONF_INCLUDE_FILES from
# /etc/sysconfig/named
include "/etc/named.conf.include";
and finally the Zone file:
; <FQDN of your SUMA>.zone
; Zone file to allow hosts in the Build
; network to resolve the SuSE Manager
; server host name
; Set a TTL for this Zone to 1 day
$TTL 1D
$ORIGIN <FQDN of your SUMA>.
@ IN SOA @ ns (
2019041801 ; serial - increment for changes
2H ; refresh
4M ; retry
1H ; expiry
1H ) ; minimum
; Apex record for the Zone
@ IN A 192.168.1.10
; Name server info for zone
@ IN NS ns
ns IN A 192.168.1.10
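Once the server is running, it's worth verifying both behaviors from a host in the "Build network" - the local Zone answer and the forwarding path. A quick check (substitute your own names; www.example.com is just a stand-in for any external name):
# Should return the Build-network IP (192.168.1.10)
dig +short @192.168.1.10 <FQDN of your SUMA>
# Should be forwarded to the main nameservers
# and resolve to the normal address
dig +short @192.168.1.10 www.example.com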
The goal of this DNS configuration is to smooth the process of registering a newly-built host to the SUMA for the "Traditional" management paradigm. It may or may not be useful for other management scenarios.
Up Ahead, Your Next Stop, GRUB
Everything to this point has been about networking. However, a "Build network" is not itself the build process. The goal of the design and configurations presented so far is to enable either the VM or LPAR to reach the point where the GRUB menu appears and can be used to initiate the OS build.
A surprising difficulty here is Cobbler. It's a component of SUMA, and it's very eager to be helpful. Among other things, it wants to generate the GRUB configuration files.
Unfortunately, Cobbler is a native of the World of Intel; it has limited understanding of PowerVM-land. When your SUMA is providing build services for multiple architectures, Cobbler can become confused when it references your Autoinstallation Distributions and generates the GRUB menu files - it will produce menus that mix both platforms, which prevents a default menu selection from launching automatically.
Fortunately, Cobbler has a built-in mechanism - triggers - to help you overcome the problem. For the design I constructed, I placed the following scripts in /var/lib/cobbler/triggers/sync/post/ on the SUMA:
#!/bin/sh
# /var/lib/cobbler/triggers/sync/post/grub-ppc.sh
#################################################
# This script is run by Cobbler every time a Sync
# is performed
# It replaces the contents of
# /srv/tftpboot/boot/grub2 using
# /root/ppc_netboot/grub2
#################################################
# Change Log (Reverse Chronology)
# Who When______ What____________________________
# dxb 2019-01-09 Initial creation
#################################################
# Copy PowerPC installation files/directories,
# including a PPC-specific grub.cfg
/usr/bin/cp -r /root/ppc_netboot/grub2 /srv/tftpboot/boot/
# End of /var/lib/cobbler/triggers/sync/post/grub-ppc.sh
#########################################################
and also
#!/bin/sh
# /var/lib/cobbler/triggers/sync/post/grub-x86.sh
#################################################
# This script is run by Cobbler every time a Sync
# is performed
# It replaces /srv/tftpboot/grub/grub.cfg
# with /root/vmware_grub/grub.cfg and
# replaces /srv/tftpboot/grub/grub.efi
# with
# /<Path To ISO Mount Point>/SLES15GA_X86_64/EFI/BOOT/grub.efi
#################################################
# Change Log (Reverse Chronology)
# Who When______ What____________________________
# dxb 2019-01-09 Initial creation
#################################################
# Copy standard grub.cfg (for x86) over whatever
# Cobbler produces
/usr/bin/cp /root/vmware_grub/grub.cfg /srv/tftpboot/grub/
# Additionally, copy "grub.efi" from the install
# media
/usr/bin/cp /<Path To ISO Mount Point>/SLES15GA_X86_64/EFI/BOOT/grub.efi /srv/tftpboot/grub/grub.efi
# End of /var/lib/cobbler/triggers/sync/post/grub-x86.sh
########################################################
The scripts replace whatever Cobbler creates with the GRUB menus that are appropriate to the environment I constructed. Among other things, this allows me to set the GRUB_DEFAULT and GRUB_TIMEOUT values in the menu, enabling automation of another step in the OS build process. One can, of course, piggy-back many other processes onto Cobbler - updating DNS or other host inventories, launching SALT Formulas or Ansible Playbooks, etc.
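In a hand-maintained grub.cfg, those settings are just plain "set" statements for the default and timeout variables. Here's a minimal sketch of the kind of menu I mean (the paths, labels, and URLs are placeholders, not my actual files):
# /root/vmware_grub/grub.cfg (sketch -
# placeholder paths and URLs)
set default=0
set timeout=10
menuentry 'SLES 15 AutoYaST Install' {
linux /boot/x86_64/loader/linux install=http://<FQDN of your SUMA>/<repo path> autoyast=http://<FQDN of your SUMA>/<profile path>
initrd /boot/x86_64/loader/initrd
}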
In future articles, I'll take you into AutoYaST.