Trouble?! Shoot your Cloud! (TUT91720)
Adam Spiers, Senior Software Engineer, SUSE Cloud/HA
Dirk Müller, Senior Software Engineer, SUSE OpenStack
SUSE OpenStack Cloud
4653 Parameters
19 Components
Troubleshooting in <1 Hour
SUSE OpenStack Cloud Functional Blocks
SUSE OpenStack Cloud Admin
[Diagram: admin node stack on SLES: SUSE Cloud add-on, Crowbar UI, Crowbar services, Chef/RabbitMQ, repo mirror]
Install logs: /var/log/crowbar/install.log
Chef/Rabbit: /var/log/rabbitmq/*.log, /var/log/chef/server.log, /var/log/couchdb/couchdb.log
Crowbar repo server: /var/log/apache2/provisioner*log
Crowbar: /var/log/crowbar/production.{out,log}
Generic SLES Troubleshooting
All nodes in SUSE OpenStack Cloud are SLES-based, so watch out for the typical issues:
- dmesg for hardware-related errors, OOM kills, and other interesting kernel messages
- the usual syslog targets, e.g. /var/log/messages
Check general node health via:
- top, vmstat, uptime, pstree, free
- core files, zombies, etc.
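A minimal first-pass sweep can be scripted with the standard tools above (the grep pattern is just a starting point, not exhaustive):

  # kernel-level trouble: hardware errors, OOM kills
  dmesg | grep -iE 'error|oom|fail'
  # zombie processes (STAT column contains Z)
  ps aux | awk '$8 ~ /Z/'
  # quick load / memory snapshot
  uptime; free -m; vmstat 1 5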
Collecting Logs for SUSE Support
supportconfig should always be run on the admin node, and additionally on any affected cloud node
supportutils-plugin-susecloud.rpm
- installed automatically on all SUSE OpenStack Cloud nodes
- collects precious cloud-specific information for further analysis
In case of HA, use hb_report or crm history or Hawk to assemble a chronological cross-cluster log
Saves a lot of pain; strongly recommended!
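For reference, a typical invocation; the output location and tarball naming vary by SLES release, so treat those details as an assumption:

  # run on the admin node and on each affected cloud node
  supportconfig
  # resulting tarball usually lands under /var/log (nts_<host>_<date>.tbz)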
Admin Node: Crowbar UI
Use the Export items in the Crowbar UI to export various log files
Troubleshooting Discovery Issues
Network Barclamp in (too much?) detail
network.json Sections
Global options: Network Mode (Single, Dual, Team), Offloading Settings, Bonding Mode
Subsections: Interface Map, Conduit Map, Networks
Network Barclamp in (too much?) detail
[Diagram: Networks (e.g. sdn, admin) map via Conduits (e.g. 1g1, 1g2, 10g1) to Interfaces (eth0, eth1, eth2) on the physical devices]
network.json Interface Maps
By default, the order of the devices is determined by alphabetic sorting of /sys/class/net
The interface map can be used to change the order of the interfaces on specific nodes
Custom interface map entries are matched by system name (dmidecode output) or serial number
network.json Interface Map Example
{
  "serial": "1234",
  "bus_order": [
    "0000:00/0000:00:01.0/0000:02:00.0",
    "0000:00/0000:00:01.0/0000:02:00.1"
  ]
}, {
  "pattern": "PowerEdge R620",
  "bus_order": [
    "0000:00/0000:00:01.1/0000:01:00.0",
    "0000:00/0000:00:01.0/0000:02:00.1"
  ]
}
Creating Interface Maps: Cheat Sheet
Alternative commands on the admin node:
  knife node show <node> -a "dmi.system.product_name"
  knife node show <node> -a "dmi.system.serial_number"
  knife node show <node> -a "crowbar_ohai.detected.network"
Delete ("forget") the discovered node before updating the network proposal.
Useful Tools for network.json (Cloud 7+)
To find out which real interface a crowbar interface name maps to:
  network-json-resolve --network-json network.json resolve <nodename> 1g1
To find out the interfaces used by the crowbar networks:
  network-json-resolve --network-json network.json networks <nodename>
Admin Node: Using Chef
SUSE OpenStack Cloud Admin
Populate ~root/.ssh/authorized_keys prior to install
Barclamp install logs: /var/log/crowbar/barclamp_install
Node discovery logs: /var/log/crowbar/sledgehammer/d<macid>.<domain>.log
Syslog of crowbar-installed nodes is sent via rsyslog to: /var/log/nodes/d<macid>.log
Useful Tricks
Root login to the cloud-installed nodes should be possible from the admin node (even in the discovery stage)
If the admin network is reachable, add to ~/.ssh/config:
  host 192.168.124.*
    StrictHostKeyChecking no
    user root
SUSE OpenStack Cloud Admin
If a proposal is applied, chef-client logs are at: /var/log/crowbar/chef-client/<macid>.<domain>.log
Useful crowbar commands:
  crowbar machines help
  crowbar transition <node> <state>
  crowbar <barclamp> proposal list
  crowbar <barclamp> proposal show <name>
  crowbar <barclamp> proposal delete default
  crowbar reset nodes
  crowbar reset proposal <barclamp> default
Admin Node: Crowbar Services
Nodes are deployed via PXE boot: /srv/tftpboot/discovery/pxelinux.cfg/*
Installed via AutoYaST; profile generated to: /srv/tftpboot/nodes/d<mac>.<domain>/autoyast.xml
You can delete it & rerun chef-client on the admin node to regenerate it
You can add useful settings to autoyast.xml:
  <confirm config:type="boolean">true</confirm>
(don't forget to chattr +i the file)
Admin Node: Crowbar UI
Raw settings in barclamp proposals allow access to "expert" (hidden) options
Most interesting are:
  debug: true
  verbose: true
Admin Node: Crowbar Gotchas
Be patient:
- Only transition one node at a time
- Only apply one proposal at a time
Cloud nodes should boot from:
1. Network
2. First disk
SUSE OpenStack Cloud Node (Controller, Compute, Storage, ...)
[Diagram: cloud node stack on SLES: chef-client plus node-specific services]
Node-specific services are all managed via Chef:
  /var/log/chef/client.log
  rcchef-client status
chef-client can be invoked manually
High Availability
What is High Availability?
Availability = Uptime / Total Time
99.99% ("4 nines") == ~53 minutes downtime / year
99.999% ("5 nines") == ~5 minutes downtime / year
High Availability (HA)
- typically accounts for mild / moderate failure scenarios, e.g. hardware failures and recoverable software errors
- automated recovery by restarting / migrating services
HA != Disaster Recovery (DR)
- cross-site failover, partially or fully automated
HA != Fault Tolerance
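The downtime budgets fall straight out of the formula:

  downtime/year = (1 - availability) x 1 year
  "4 nines":  0.0001  x 365.25 x 24 x 60 minutes = ~52.6 minutes/year
  "5 nines":  0.00001 x 365.25 x 24 x 60 minutes = ~5.3 minutes/year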
Internal architecture
Resource Agents
Executables which start / stop / monitor resources
RA types:
- systemd services: used for Apache, HAProxy in Cloud 6+
- LSB init scripts
- OCF scripts (~ LSB + meta-data + monitor action + ...): /usr/lib/ocf/resource.d/, used for DRBD, VIPs
- Legacy Heartbeat RAs (ancient, irrelevant)
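OCF RAs can also be exercised by hand, which helps separate RA bugs from Pacemaker issues. A minimal sketch using ocf-tester from the resource-agents package; the IPaddr2 agent and its ip parameter are just an illustration, and note this really starts/stops the resource:

  # run the start/stop/monitor test cycle against a single RA
  ocf-tester -n test-ip -o ip=192.168.124.200 \
      /usr/lib/ocf/resource.d/heartbeat/IPaddr2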
Results of Resource Failures
If the fail counter is exceeded, a clean-up is required:
  crm resource cleanup $resource
Failures are expected:
- when a node dies
- when storage or network failures occur
Failures are not expected during normal operation:
- applying a proposal
- starting or cleanly stopping resources or nodes
Unexpected failures usually indicate a bug! Do not get into the habit of cleaning up and ignoring!
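To see which resources have accumulated failures before you clean anything up:

  # fail counts per resource, plus recent failed actions
  crm_mon -1 -f
  # per-resource / per-node view
  crm resource failcount <resource> show <node>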
crm configure graph FTW!
Before Diagnosis
Understand initial state / context:
- crm configure graph is awesome!
- crm_mon
- Which fencing devices are in use?
- What's the network topology?
- What was done leading up to the failure?
Look for the first (relevant) failure:
- failures can cascade, so don't confuse cause and effect
- watch out for STONITH
Diagnosis
What failed?
- Resource?
- Node?
- Orchestration via Crowbar / chef-client? (cross-cluster ordering)
- Pacemaker config? (e.g. incorrect constraints)
- Corosync / cluster communications?
chef-client logs are usually a good place to start
More on logging later
Symptoms of Resource Failures
Failures reported via Pacemaker UIs:
  Failed actions:
    neutron-ha-tool_start_0 on d52-54-01-77-77-01 'unknown error' (1): call=281, status=complete, last-rc-change='Thu Jun 4 16:15:14 2015', queued=0ms, exec=1734ms
    neutron-ha-tool_start_0 on d52-54-02-77-77-02 'unknown error' (1): call=259, status=complete, last-rc-change='Thu Jun 4 16:17:50 2015', queued=0ms, exec=392ms
Services become temporarily or permanently unavailable
Services migrate to another cluster node
Symptoms of Node Failures
Services become temporarily or permanently unavailable, or migrate to another cluster node
Node is unexpectedly rebooted (STONITH)
Crowbar web UI may show a red bubble icon next to a controller node
Hawk web UI stops responding on one of the controller nodes (you should still be able to use the others)
ssh connection to a cluster node freezes
Symptoms of Orchestration Failures
Proposal / chef-client run failed
Synchronization time-outs are common and obvious:
  INFO: Processing crowbar-pacemaker_sync_mark[wait-keystone_database] action guess (keystone::server line 232)
  INFO: Checking if cluster founder has set keystone_database to 5...
  FATAL: Cluster founder didn't set keystone_database to 5!
Find the synchronization mark in the recipe:
  crowbar_pacemaker_sync_mark "wait-keystone_database"
  # Create the Keystone Database
  database "create #{node[:keystone][:db][:database]} database" do ...
So the node timed out waiting for the cluster founder to create the keystone database, i.e. you're looking at the wrong log! So...
  root@crowbar:~ # knife search node founder:true -i
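Putting that together, a plausible next step is to jump straight to the founder's own chef-client log (the hostname here is only an example):

  # identify the cluster founder from the admin node
  knife search node founder:true -i
  # then inspect its chef-client log for the real failure
  ssh d52-54-01-77-77-01 less /var/log/chef/client.log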
HA-related Logging
All changes to cluster configuration are driven by chef-client:
- either from application of a barclamp proposal: admin node, /var/log/crowbar/chef-client/$node.log
- or by the chef-client daemon running every 900 seconds: /var/log/chef/client.log on each node
Remember chef-client often runs in parallel across nodes
All HAE components log to /var/log/messages and /var/log/pacemaker.log on each cluster node
Nothing Pacemaker-related lives on the admin node
HAE Logs
Which nodes' log files should you look at?
- Node failures: /var/log/messages from the DC
- Resource failures: /var/log/messages from the DC and from the node with the failed resource
- but remember the DC can move around (elections)
Use hb_report or crm history or Hawk to assemble a chronological cross-cluster log
Saves a lot of pain; strongly recommended!
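A typical hb_report invocation; the timestamp format shown is one of several it accepts, so treat the details as an assumption:

  # gather logs from all cluster nodes from 16:00 onwards into one report
  hb_report -f "2015/06/04 16:00" /tmp/hb_report-neutron-failure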
Syslog Messages to Look Out For
Fencing going wrong:
  pengine[16374]: warning: cluster_status: We do not have quorum - fencing and resource management disabled
  pengine[16374]: warning: stage6: Node d52-54-08-77-77-08 is unclean!
  pengine[16374]: warning: stage6: Node d52-54-0a-77-77-0a is unclean!
  pengine[16374]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Fencing going right:
  crmd[16376]: notice: te_fence_node: Executing reboot fencing operation (66) on d52-54-0a-77-77-0a (timeout=60000)
  stonith-ng[16371]: notice: handle_request: Client crmd.16376.f6100750 wants to fence (reboot) 'd52-54-0a-77-77-0a' with device '(any)'
The reason for fencing is almost always earlier in the log. Don't forget all the possible reasons for fencing!
Lots more; get used to reading /var/log/messages!
Stabilising / Recovering a Cluster
Should only be needed in corner cases; such issues need investigation from SUSE
Start with a single node; stop all others:
  rcchef-client stop
  rcpacemaker stop
  rccrowbar_join stop
Clean up any failures:
  crm resource cleanup
  crm_resource -o | \
    awk '/\t(Stopped|Timed Out)/ { print $1 }' | \
    xargs -n1 crm resource cleanup
Make sure chef-client is happy
Stabilising / Recovering a Cluster (cont.)
Add another node in:
  rm /var/spool/corosync/block_automatic_start
  rcpacemaker start
  rccrowbar_join start
Ensure nothing gets fenced; ensure no resource failures
If fencing happens, check /var/log/messages to find out why, then rectify the cause
Repeat until all nodes are in the cluster
Degrade Cluster for Debugging
  crm configure location fixup-cl-apache cl-apache \
    rule -inf: '#uname' eq $HOSTNAME
Allows degrading an active/active resource to only one instance per cluster
Useful for tracing requests
TL;DR: Just Enough HA
  crm configure show
  crm_mon [-1] (? for help)
  crm resource restart <X>
  crm resource cleanup <X>
Compute HA Architecture
Verifying Compute Node Failure Detection
Pacemaker monitors compute nodes via pacemaker_remote
If a compute node failure is detected:
- the compute node is fenced
- crm_mon etc. will show the node as unclean / offline
- Pacemaker invokes fence-nova as a secondary fencing resource:
  crm configure show fencing_topology
Find the node running fence_compute:
  crm resource show fence-nova
Verify Secondary Fencing
The fence_compute script:
- tells the nova server that the node is down
- updates an attribute on the compute node to indicate that it needs recovery
Log files:
  /var/log/nova/fence_compute.log
  /var/log/messages on the DC and on the node running fence-nova
Verify attribute state via:
  attrd_updater --query --all --name=evacuate
Verify the Compute Node Failure Recovery Process
NovaEvacuate spots the attribute and calls nova evacuate:
  root@controller1:~ # crm resource show nova-evacuate
  resource nova-evacuate is running on: d52-54-77-77-77-02
nova resurrects the VM on another node:
  root@controller2:~ # grep nova-evacuate /var/log/messages
  NovaEvacuate [...] Initiating evacuation
  NovaEvacuate [...] Completed evacuation
Warning: no retries if resurrection fails!
Process Failures
pacemaker_remote looks after key compute node services
Use crm resource show cl-g-nova-compute to find out which services it looks after
Pacemaker will automatically restart dead processes
Now Coming to OpenStack
OpenStack Architecture Diagram
OpenStack Block Diagram
[Diagram; callouts: "Accesses almost everything", "Keystone: SPOF"]
OpenStack Architecture
Typically each OpenStack component provides:
- an API daemon / service
- one or many backend daemons that do the actual work
- an openstack / <prj> command line client to access the API
- a <proj>-manage client for admin-only functionality
- a dashboard ("Horizon") Admin tab for a graphical view of the service
- an SQL database for storing state
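In practice that pattern means the same service can be poked at several levels; a sketch using Nova as the example (standard OpenStack-era commands, shown purely as illustrations):

  # user-facing API, via the CLI client
  nova list            # or: openstack server list
  # admin-only operations, straight to the DB / internals
  nova-manage db sync
  # or the raw API endpoint, if you need to bypass the client
  curl http://<controller>:8774/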
OpenStack Packaging Basics
Packages are usually named openstack-<codename>
- usually a subpackage for each service (-api, -scheduler, etc.)
- logs go to /var/log/<codename>/<service>.log
- each service has an init script:
  dde-ad-be-ff-00-01:~ # rcopenstack-glance-api status
  Checking for service glance-api ... running
OpenStack Debugging Basics
Log files often lack useful information without verbose enabled
TRACEs of processes are not logged without verbose
Many reasons for API error messages are not logged unless debug is turned on
Debug is very verbose (>10 GB per hour)
https://ask.openstack.org/
http://docs.openstack.org/liberty/
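The switches live in each service's config file; a sketch for Nova (and remember from the Crowbar slides that in a Crowbar deployment these belong in the barclamp's Raw settings, or chef-client will revert manual edits):

  # /etc/nova/nova.conf
  [DEFAULT]
  verbose = True
  debug = True

  # then restart the affected service, e.g.:
  rcopenstack-nova-api restart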
OpenStack Architecture
[Block diagram repeated; callouts: "Accesses almost everything", "Keystone: SPOF"]
OpenStack Dashboard: Horizon
/var/log/apache2/openstack-dashboard-error_log
Get the exact URL it tries to access!
Enable debug in the Horizon barclamp
Test components individually
OpenStack Identity: Keystone
Needed to access all services
Needed by all services for checking authorization
Use keystone token-get to validate credentials and test service availability
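A quick smoke test from the admin or a controller node; the ~/.openrc credentials file is the usual Crowbar convention, so adjust to wherever your credentials live:

  source ~/.openrc
  keystone token-get   # returns a token if credentials and the service are OK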
OpenStack Imaging: Glance
To validate service availability:
  openstack image list
  openstack image save --file /dev/null <id>
  openstack image show <id>
Don't forget the hypervisor_type property!
OpenStack Compute: Nova
  nova service-list
Nova Overview
[Diagram: API in front of Scheduler, Conductor, and multiple Compute services]
"Launches" go to the Scheduler; the rest goes to the Conductor
Nova Booting VM Workflow
Nova: Scheduling a VM
OpenStack Volumes: Cinder
[Diagram: API in front of Scheduler and multiple Volume services]
OpenStack Cinder: Volumes
OpenStack Networking: Neutron
The Swiss Army knife for SDN:
  neutron agent-list
  neutron net-list
  neutron port-list
  neutron router-list
There's no neutron-manage
Connectivity through a legacy router (distributed = False): on the network node
Connectivity through a DVR router (distributed = True):
- on the network node for SNAT traffic and DHCP traffic
- on the compute node(s) for floating IPs and traffic between project networks
OpenStack Networking: Distributed Virtual Router
Every compute node handles:
- inter-VM traffic between project (tenant) networks
- mapping of floating IPs to project networks
- metadata for VMs
The network node handles SNAT routing
Neutron setup (non-DVR)
Neutron setup (with DVR)
OpenStack Neutron: Where Are My Packets?
Where to tcpdump:
VM connectivity through a legacy router (distributed set to False):
- on the network node(s) assigned to that router
VM connectivity through a DVR router (distributed set to True):
- on the network node for SNAT traffic and DHCP traffic
- on the compute node for floating IPs and traffic between project (tenant) networks
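Neutron routers and DHCP servers live in network namespaces, so the capture has to happen inside the right one. A sketch; the namespace name is an example, though the qrouter-/qdhcp- prefixes are the standard Neutron convention:

  # list namespaces on the network or compute node
  ip netns
  # capture ICMP inside a router namespace
  ip netns exec qrouter-<router-uuid> tcpdump -n -i any icmp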
Basic Network Layout
Networking with OVS: Compute Node
http://docs.openstack.org/havana/config-reference/content/under_the_hood_openvswitch.html
Networking with LB: Compute Node
Neutron Troubleshooting
Q&A
http://ask.openstack.org/
http://docs.openstack.org/
https://www.suse.com/documentation/suse-openstack-cloud-6/
Thank you
Bonus Material
OpenStack Orchestration: Heat
OpenStack Orchestration: Heat
Uses Nova, Cinder, Neutron to assemble complete stacks of resources
  heat stack-list
  heat resource-list / resource-show <stack>
  heat event-list / event-show <stack>
Usually necessary to query the actual OpenStack service for further information
OpenStack Imaging: Glance
Usually issues are in the configured Glance backend itself (e.g. RBD, Swift, filesystem), so debugging concentrates on those
Filesystem: /var/lib/glance/images
RBD:
  ceph -w
  rbd -p <pool> ls