Documentation for Vmware-Monitor for Xymon (VMX) 1.1.0_b2

Vmware-Monitor for Xymon (VMX)

an extension for Xymon to monitor VMware hosts

Description: Documentation for Vmware-Monitor for Xymon (VMX)
Author: Thomas Eckert <thomas.eckert@it-eckert.de>
Contact: VMX-related questions to <vmx@it-eckert.de>
Date: 2010-08-29
Version: 1.1.0_b2
Copyright: Copyright © 2009, 2010 Thomas Eckert, http://www.it-eckert.de/
Info: see <http://www.it-eckert.de/software/vmx> for more information

1   Overview

Vmware-Monitor for Xymon (VMX) is an extension/addon for the Xymon systems- and network-monitoring system <http://www.xymon.com/>. It provides complete monitoring of a virtual environment based on VMware ESX or VMware-Server without installing additional software on the monitored hosts.

This monitoring includes tracking the CPU- and RAM-usage of each running virtual machine (columns vm_cpu and vm_mem), an overview for each VMware-host (column vmware) and a one-click overview of all monitored VMware hosts (column vmwareO). For each individual VM there is also a vm_status column that provides additional information about that VM (like vmware-tools status, number of VMDKs and vCPUs, ...).

Started and stopped VMs are also recognized, as are moves of VMs between hosts (no matter whether the move was initiated manually, by vMotion or another mechanism).

The results are displayed in five additional status-columns on your Xymon webinterface. These columns are hyperlinked so that it is very easy to navigate back and forth in the "dependency-tree" of your virtual environment.

2   Features

Currently available features/highlights of Vmware-Monitor for Xymon (VMX) are summarized in the Overview above and described in detail in the following sections.

3   License

You are allowed to evaluate Vmware-Monitor for Xymon (VMX) for up to 60 days without any limitations. If it proves useful for you then you are required to obtain a license in order to be allowed to use it on a regular basis.

For details on licensing please see <http://www.it-eckert.de/software/vmx>.

4   Design

Vmware-Monitor for Xymon (VMX) is designed as a client-server plugin for Xymon.

The vmware-monitorc (client) collects data from the VMware hosts (ESX or VMware-Server) via the official VMware APIs, running on the official vSphere Management Assistant (vMA) (see [1] for more information). The collected data is sent to the Xymon server (through the "user"-channel), where it is received by vmware-monitord (the server).

The actual status messages are then generated and sent to the (local) Xymon server by vmware-monitord.
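The resulting data-flow can be sketched as follows (a simplified view; whether vmware-monitord runs on the Xymon server or on the vMA depends on the chosen setup, see below):

    VMware hosts (ESX or VMware-Server)
          | vSphere API
          v
    vmware-monitorc (on the vMA)
          | bb(1), "user"-channel
          v
    vmware-monitord
          | status messages
          v
    Xymon server (status columns, graphs)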

The default installation of Vmware-Monitor for Xymon (VMX) was changed in v1.1.0 to ease the deployment in Hobbit setups: vmware-monitord can now be installed on the vMA too. This eliminates the dependency on user-channel patches on the hobbit-server; instead, the setup on the vMA is extended to include a stripped-down server.

TODO: details on vMA-setup, differences of "local-mode" and "server-mode" need to be documented.

5   Installation

As for every installation: back up the files you intend to change, or even better make a complete backup of your Xymon installation before you begin. This is not strictly necessary as there is no automatic installation routine and you have full control over what is changed -- but backups are always a good idea before changing anything in a production environment. (If you just want to test Vmware-Monitor for Xymon (VMX) you could install -- or maybe clone, if your Xymon server is already installed as a VM -- a VM with a fresh install of Xymon.)

As a completely risk-free option you could (probably should) build a vmware-monitoring-only environment (at least for testing) by setting up a fresh Xymon server and vMA and installing Vmware-Monitor for Xymon (VMX) on that "island". In that case I would suggest installing the current SVN-snapshot of the "4.3.0"-branch of Xymon (as the split-ncv and user-channel fixes are already integrated there). For Gentoo users an ebuild is included in the tarball.

The directory-structure of the installation-tarball is similar to that of a standard Xymon (or hobbit) installation, so it is fairly self-explanatory where the files have to be placed. After a short requirements-section the installation of server and client is described in detail below. This includes instructions for the necessary modifications of the configuration files in your Xymon installation.

5.1   Requirements

General note: The "user"-channel of Xymon is used to transfer the collected data to the Xymon server -- if any other data is transferred on that channel conflicts may occur. Vmware-Monitor for Xymon (VMX) should be rather robust and tolerant if "non-VMX" messages are received. Please let me know if you encounter problems with concurrent datagrams on the user-channel so that a solution can be developed.

5.1.1   server-component (vmware-monitord)

  • gawk >=3.1.3 (various gawk-specific functionalities are used, e.g. TCP/IP networking, fflush(), ...). By default gawk is expected to be accessible at /bin/gawk; a quick check is shown after this list.
  • Xymon/Hobbit server:
      • xymon 4.2.3: without any patches
      • xymon-4.3.0_beta2: with the "splitncv-patch" or newer (the "splitncv-patch" is included in the 4.3.0 branch of Xymon since 2009-11-14)
      • hobbit-4.2.0: due to the missing SPLITNCV-functionality the included alternative graphing-definitions have to be used
    In short: a working "user"-channel and "splitncv"-module are needed on the Xymon server. Gentoo ebuilds for Xymon can be found in ./contrib/gentoo/.
  • the server-component needs to be installed on the Xymon server
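A quick way to verify the gawk requirement on the Xymon server (generic shell commands, not part of VMX):

    gawk --version | head -n1    # should report GNU Awk 3.1.3 or newer
    ls -l /bin/gawk              # the default path expected by vmware-monitord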

5.1.2   client-component (vmware-monitorc)

  • vMA 4.0 from VMware or equivalent tools (namely the utility-programs from vmware-vSphere-CLI-4.0.0-161974), see [1]
  • hobbit-client-4.2.0 or better. vmware-monitorc only uses the bb(1) command to send data to the Xymon server via the "user"-channel -- this works fine with a vanilla hobbit-4.2.0. For Vmware-Monitor for Xymon (VMX) it is not necessary to upgrade the client to Xymon. Suitable RPMs can be found at [2].
  • the bash-shell of course
  • in particular you do NOT NEED the following:
      • vCenter server
      • additional database system(s)
      • Windows Server license

5.2   installing the server

The server installation consists of two logical parts:

  1. installing and configuring the "data-processing"
  2. configuring the "graphing" system and the graphs to be shown

The data-processing is done by vmware-monitord, which runs under the control of hobbitd_channel(8) and processes the messages sent by the client (vmware-monitorc) to generate the Xymon "status"-messages.

The graphing data is extracted from the status messages via the NCV-module of Xymon (in fact SPLITNCV is used).

5.2.1   data-processing

  • copy server/bin/vmware-monitord to ~/server/bin/ on your Xymon server and check if the shebang (#!/bin/gawk) fits your path to gawk and adjust if needed. The default of /bin/gawk should work on almost any Linux distribution.

  • enable starting of vmware-monitord with the default 5 minute test-interval by appending the following lines to ~/server/etc/hobbitlaunch.cfg (paths may need to be adjusted to fit your setup):

    [vmware-monitord]
            ENVFILE /usr/lib/xymon/server/etc/hobbitserver.cfg
            NEEDS hobbitd
            CMD hobbitd_channel --channel=user --log=$BBSERVERLOGS/vmware-monitord.log vmware-monitord
    

    (vmx-hobbitlaunch.cfg is included to be easily appended to your config)

  • if you have set "--status-senders=IP[/MASK][,IP/MASK]" for "hobbitd" in hobbitlaunch.cfg you may need to add your processing host's IP

  • configuration-file server/etc/vmware-monitor.cfg: 2 options:

    1. If you want to prevent downloading of the configuration for security reasons (it contains passwords and the Xymon-connection is not encrypted by default) then copy the configuration file vmware-monitor.cfg to $HOBBITCLIENTHOME/etc/ on the vMA (the server still needs its own copy in ~/server/etc/). This "download-prevention" can be used for the bb-hosts-file too, see VMX configuration with vmware-monitor.cfg.
    2. If you want to use the central configuration, i.e. all configuration done on the Xymon server, then copy vmware-monitor.cfg to ~/server/etc/ only; the client downloads it from there.

    Details about needed adjustments are given in the section VMX configuration with vmware-monitor.cfg.

  • add the "vmware"-test to your VMware hosts in ~/server/etc/bb-hosts.

Example

Let esx1.local and esx2.local be our VMware hosts. Define these 2 hosts in your bb-hosts with the "vmware"-tag as follows:

1.2.3.4 esx1.local      # [ other tests ] vmware [ other tests ]
1.2.3.5 esx2.local      # [ other tests ] vmware [ other tests ]
  • add your VMs with their "displaynames" from the VMware environment to bb-hosts.

    If your VMs' displaynames are not identical to the hostnames of the VMs, you could use CLIENT:<displayname> in your bb-hosts for the VM, or (really not recommended!) add a dummy-entry with "0.0.0.0" and "noconn".

Example

Your vMA has the hostname "vma.local", but after you installed the Virtual Appliance you did not rename the VM and thus the displayname is still "vMA-ovf-4.0.0-161993". In this case add the following to your bb-hosts:

192.168.1.1    vma.local       # CLIENT:"vMA-ovf-4.0.0-161993"

Warning

Using the CLIENT:-directive in bb-hosts to override the reported hostnames soon leads to an overly complex setup. Another negative side-effect of using CLIENT: is that it breaks the hyperlinks between the test-columns generated by Vmware-Monitor for Xymon (VMX).

So you should only use the CLIENT:-directive if changing the displayname of a VM is not possible for serious reasons.

5.2.2   graphing

  • copy server/etc/hobbitgraph.d/vmware-monitor-graph.cfg to your hobbitgraph.d-directory if you use a "directory <path-to-hobbitgraph.d>" statement in your ~/server/etc/hobbitgraph.cfg. CAUTION: only one space is allowed after the directory-statement!

    OR:

    append the contents of server/etc/hobbitgraph.d/vmware-monitor-graph.cfg to ~/server/etc/hobbitgraph.cfg

  • adjust ~/server/etc/hobbitserver.cfg as follows: using cut-n-paste from the included server/etc/vmx-hobbitserver.cfg is probably the safest way, as this documentation is aimed at printing and web-display; take special care with line continuation characters!

  1. append to variable TEST2RRD (a sketch of the complete resulting line is shown after the warning below):

    ,vmwareO=ncv,vmware=ncv,vm_cpu=ncv,vm_mem=ncv
    
  2. insert the following lines (e.g. right after TEST2RRD):

    ## Vmware-Monitor for Xymon (VMX):
    ## append "vmwareO=ncv,vmware=ncv,vm_cpu=ncv,vm_mem=ncv" to TEST2RRD
    SPLITNCV_vmwareO="vmwarehosts:GAUGE,physcpuavail:GAUGE,physcpuused:GAUGE,physcpuused_pct:GAUGE,\
            physramavail:GAUGE,physramused:GAUGE,physramused_pct:GAUGE,\
            numbervms:GAUGE,vcpurunning:GAUGE,runtime_c:GAUGE,runtime_d:GAUGE,*:NONE"
    SPLITNCV_vmware="cpuuse:GAUGE,memuse:GAUGE,cpuuse_pct:GAUGE,memuse_pct:GAUGE,\
            numbervms:GAUGE,vcpurunning:GAUGE,bla:DERIVE,*:NONE"
    SPLITNCV_vm_cpu="cpuuse:GAUGE,vcpurunning:GAUGE,*:NONE"
    SPLITNCV_vm_mem="memuse:GAUGE,memuse_pct:GAUGE,*:NONE"
    ## Vmware-Monitor for Xymon (VMX) end.
    

Warning

DO NOT use line continuation characters ("\") in hobbitserver.cfg, these are only used to make the documentation more readable.
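To illustrate step 1 above: a sketch of what the complete TEST2RRD-line could look like afterwards (the entries before the VMX-part are hypothetical placeholders; keep whatever your hobbitserver.cfg already contains):

    TEST2RRD="cpu=la,disk,memory,conn=tcp,vmwareO=ncv,vmware=ncv,vm_cpu=ncv,vm_mem=ncv"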

  • if you want the vmware-related graphs to show up on the "trends" column you have to add them to the GRAPHS-variable in ~/server/etc/hobbitserver.cfg.

Example

append:

,vmware::10,vmwareO::10,vm_cpu,vm_mem

to the GRAPHS variable.

Or insert them at a position of your choice in GRAPHS, e.g. "vmware,vmwareO" at the beginning and insert "vm_cpu" right after "la" and "vm_mem" after "memory".
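A sketch of a resulting GRAPHS-line for this variant (the non-VMX entries are hypothetical placeholders; your existing list will differ):

    GRAPHS="vmware,vmwareO,la,vm_cpu,disk,memory,vm_mem,users,conn"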

In your bb-hosts-file you could also make use of the TRENDS:-statements.

Example

You could append:

TRENDS:*,vmwareO:vmwareO_cpu|vmwareO_mem|vmwareO_counters|vmwareO_runtime

to show all available graphs for vmwareO in a more readable form. A slight problem with empty/broken graphs showing up in this case needs to be investigated and fixed though.

TODO: XXX: eliminate * above!
  • restart Xymon on the server (/etc/init.d/xymon restart or whatever mechanism you use) to activate the changes

5.3   installing the client

  • install vMA somewhere (see [1] for additional notes)

  • install the hobbit-/xymon-client on the vMA (hobbit-4.2.0+ is fine) and configure the client to report to your Xymon server (see [2] for a download link for RPMs).

    Note: for example the xymon-client-4.2.3 rpm is configured in /etc/sysconfig/xymon-client; after changing this file at least one start of the client via the init.d-script is necessary to create the needed runtime-files.

  • copy client/ext/vmware-monitorc to ~/client/ext/-dir on your vMA

  • add vmware-monitorc to ~/client/etc/clientlaunch.cfg on vMA to send data to the server by appending the following:

    [vmware-monitorc]
            ENVFILE $HOBBITCLIENTHOME/etc/hobbitclient.cfg
            CMD $HOBBITCLIENTHOME/ext/vmware-monitorc -s
            LOGFILE $HOBBITCLIENTHOME/logs/vmware-monitorc.log
            INTERVAL 5m
    

    (vmx-clientlaunch.cfg is included to be easily appended to your config)

  • add your vMA to bb-hosts on the Xymon server of course

  • for security reasons you may want to put the configuration file vmware-monitor.cfg into ~/client/etc/ on the vMA (see vmware-monitor.cfg in the data-processing section above for details).
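A condensed sketch of the file-placement steps above, run on the vMA as the monitoring user (assuming the installation-tarball is unpacked in the current directory; the in-tarball locations of vmx-clientlaunch.cfg and vmware-monitor.cfg are assumptions, adjust as needed):

    cp client/ext/vmware-monitorc ~/client/ext/
    cat client/etc/vmx-clientlaunch.cfg >> ~/client/etc/clientlaunch.cfg
    ## optional "download-prevention", see section 5.5.1:
    cp server/etc/vmware-monitor.cfg ~/client/etc/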

5.3.1   Performance considerations for vMA

[ Note: The information in this section is not relevant for a VMware environment of up to 4 hosts with a reasonable load. It might be interesting to tune your setup anyway, but it is not necessary for normal operations. ]

Querying the VMware hosts for all the required data can generate significant load on the vMA. In addition it takes several seconds to fetch all the data needed from one single host (the individual runtime depends on a variety of factors, around 20 seconds is common). This becomes a problem if a large number of hosts are monitored as the time to query all hosts might exceed the 5 minute test-interval of Xymon.

For that reason Vmware-Monitor for Xymon (VMX) has the possibility to query VMware hosts in parallel. This feature is disabled by default (CLIENT_CONCURRENCY=1).

In an environment with 7 ESX 3.5 hosts the runtime of vmware-monitorc is roughly 145 seconds with a very low load on the vMA-VM. Increasing the concurrency to 4 only saves a few seconds of runtime but leaves the vMA highly (over)loaded. After adding a 2nd vCPU to the vMA (and increasing the memory of the VM to 1 GB), the runtime drops below 1 minute with the load on the vMA in a reasonable range.

In short: if the concurrency of vmware-monitorc is increased to reduce runtime_c, adjustments to the vMA-VM might be needed to avoid an overload that prevents the runtime-reduction.
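For example, to query up to 4 hosts in parallel the client-section of vmware-monitor.cfg could contain the following (the quoting style is assumed to match the server-options shown in section 5.5.3):

    ## number of VMware hosts queried in parallel by vmware-monitorc
    CLIENT_CONCURRENCY="4"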

5.3.2   Testing the connection to VMware hosts and finalizing the client-installation

After installing the client components it is time to test whether data can be fetched from the VMware hosts at all. Log in to your vMA and su to the hobbit- or xymon-user. The following steps outline the testing-process:

  1. Basic test using the vCLI utility hostinfo.pl to fetch data from the hosts: adjust 192.168.1.1:443 in the example to the IP and port-number you entered in bb-hosts and vmware-monitor.cfg respectively, and use the username and password you specified there:

    /usr/lib/vmware-vcli/apps/host/hostinfo.pl --url https://192.168.1.1:443/sdk
    
    If everything works fine you'll get "Host Information" back (Host Name, Port Number, BootTime, ...). If not, check the URL (incl. the port-number) for spelling errors and verify the username and password.
  2. Run vmware-monitorc in debug-mode: this fetches the needed configuration-files from the Xymon server (or uses the ones you placed in $HOBBITCLIENTHOME/etc/ on the vMA) and queries all VMware hosts. The results are printed to stdout and no data is sent to the Xymon server yet:

    ~/client/bin/bbcmd /usr/lib/hobbit/client/ext/vmware-monitorc -d
    

    If errors occur check Possible problems and solutions.

  3. Run the above command without the "-d" debug-option to send one data-set to the Xymon server. Most likely you will have some ghost-clients for your VMs. You can add them to the bb-hosts file or leave them as ghosts for now.

  4. If the testing above was successful, start the hobbit-/xymon-client on the vMA:

    /etc/init.d/hobbit-client start (or: /etc/init.d/xymon-client start)
    

5.3.3   Possible problems and solutions

If no data is sent to the Xymon server you can do the following:

  • check the logfiles (vmware-monitorc.log on the client, vmware-monitord.log on the server) for errors.

  • run the client in debug-mode and look for reported errors, e.g. the following indicates problems with a multi-BBD-environment (see next point for details):

    $ ~/client/bin/bbcmd ~/client/ext/vmware-monitorc -d
    2010-01-11 04:43:46 Using default environment file /usr/lib64/xymon/client/etc/hobbitclient.cfg
    2010-01-11 04:43:46 ERROR: /usr/lib64/xymon/client/tmp/vmware-monitor.cfg does not exist or is empty!
    
  • multi-BBD environments: if the server installation of VMX is not done on the 1st BBD, it is required to (at least temporarily) use the "vmware-monitord-BBD" as the 1st one in your vMA hobbit/xymon-configuration, i.e. make sure the IP of the Xymon server running vmware-monitord is the 1st one listed in the BBDISPLAYS-variable on your vMA (see the example after this list). This is necessary for two reasons:

  1. the central configuration uses the config command of bb(1) which only uses the 1st BBD configured. The bb-hosts- and vmware-monitor.cfg-configfiles are fetched with this mechanism by vmware-monitorc.
  2. the data is only sent to the 1st BBD via the user-channel.
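For example, if 192.168.1.10 runs vmware-monitord and 192.168.1.20 is a second BBD (both IPs are placeholders), the variable on the vMA should look along these lines:

    ## the BBD running vmware-monitord has to be listed first
    BBDISPLAYS="192.168.1.10 192.168.1.20"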

5.4   Quick check if everything is running on the Xymon server

On the server you should see the new file $BBSERVERLOGS/vmware-monitord.log (e.g. /var/log/xymon/) and, upon reception of the 1st user-message from the client, 2 lines similar to the following:

Example

First 2 lines logged in vmware-monitord.log after the 1st dataset was received:

2009-12-06 14:07:24 vmware-monitord v0.9.9 started, pid=17211, ppid=16962
2009-12-06 14:07:24 done with dataset 1

TODO: XXX: update log-lines to current version

For the first 12 datasets a log-entry is generated for each one ("done with dataset N"), after that only every 12th dataset is logged to reduce the growth-rate of the logfile (only one log-entry per hour).

On the client the logfile is $HOBBITCLIENTHOME/logs/vmware-monitorc.log (e.g. ~/client/log/).

5.5   VMX configuration with vmware-monitor.cfg

The configuration is split into a client- and server-section that are used by vmware-monitorc and vmware-monitord respectively.

Adjust vmware-monitor.cfg to fit your setup. Filling in the username and password for your VMware hosts and the management-port used is required for Vmware-Monitor for Xymon (VMX) to be able to fetch data from the hosts.

5.5.1   security considerations and "download-prevention"

As already outlined in data-processing, the configuration file is transferred unencrypted over the network (currently Xymon does not support encrypted server-client-communication by itself). As login information is contained in the configuration you may want to prevent it from being transferred in clear over the network. To do so use the "download-prevention" of Vmware-Monitor for Xymon (VMX).

The "download-prevention" also works for the bb-hosts-file. So if for some reason you want the client not to download an up-to-date bb-hosts-file from the server just place a bb-hosts-file in $HOBBITCLIENTHOME/etc/ on the vMA.

Please note that both vmware-monitorc and vmware-monitord use the configuration, so if you make use of "download-prevention" on the client you still need to install vmware-monitor.cfg on the Xymon server in ~/server/etc/!
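A minimal sketch of enabling the "download-prevention", run on the vMA as the monitoring user (paths follow the layout used throughout this document):

    cp vmware-monitor.cfg $HOBBITCLIENTHOME/etc/   # keeps the login-data off the wire
    cp bb-hosts $HOBBITCLIENTHOME/etc/             # optional: also suppress the bb-hosts download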

5.5.2   client-options (for vmware-monitorc)

The client-section of the configuration-file ~/server/etc/vmware-monitor.cfg holds the login-data to the VMware hosts monitored and a tuning-parameter to optimize the runtime of vmware-monitorc.

It is strongly recommended to create a read-only account on your VMware hosts for the monitoring user: add a user (e.g. "xymon") with read-only permissions to each of your ESX hosts.

You may also adjust CLIENT_CONCURRENCY to tune the runtime of vmware-monitorc, see Performance considerations for vMA for more information.

Environments with multiple BBDISPLAY-servers (BBDs) may need additional configuration, depending on the use-case. If multiple BBDs are used to build a poor man's redundant setup where all data is sent to all BBDs, you have to configure vmware-monitorc to send data not only to the first BBD listed in $BBDISPLAYS (the default, and by the way also the server the configuration is fetched from) but to all of them. This is controlled by CLIENT_TOALL_BBDS in vmware-monitor.cfg. Change the default of NO to YES to enable sending data to all BBDs. Keep in mind that the vMA remains a single point of failure though.
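The corresponding entry in the client-section of vmware-monitor.cfg would then look like this (the quoting style is assumed to match the server-options below):

    ## send the collected data to all BBDs listed in $BBDISPLAYS, not only to the 1st one
    CLIENT_TOALL_BBDS="YES"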

5.5.3   server-options (for vmware-monitord)

This is the default-configuration for vmware-monitord in the configuration-file:

## StartupColor: [GREEN|YELLOW]
## Color of the VMs in the 1st run of VMX
##      yellow = all "vm_status" (and in turn "vmware" and "vmwareO")
##              columns go yellow
## This forces xymon to record the current state => available via "history"
StartupColor="YELLOW"

## Debug:
##  0=disable debugging
##  1=function-entry
##  2=print func-returns
##  10=output of less noisy funcs
##  20=output of more noisy funcs
Debug="0"

## HostColorChangeOn*VM: [YES|NO]
## Change the color of the "vmware"-column (to yellow) if a VM is
## started/stopped/moved
HostColorChangeOnStartedVM="YES"
HostColorChangeOnStoppedVM="YES"
HostColorChangeOnMovedVM="YES"

## VMColorChangeOn*: [YES|NO]
## Change the color of the "vm_status"-column (to yellow) if a VM is
## started/stopped/moved
VMColorChangeOnStart="YES"
VMColorChangeOnStop="YES"
VMColorChangeOnMove="YES"

6   Graphing options

If you follow the installation-instructions all graphs track their values via the "SPLITNCV"-backend of Xymon, thus each metric is stored in a separate RRD.

There are some alternatives available and pre-configured regarding the data that is actually drawn on the webinterface of Xymon, e.g. if you prefer to have the absolute values in MHz and MB or rather want to show the calculated usages in percent.

Where possible, graphs are available with percentage ("_pct") and absolute ("_abs") values. This is the case for all memory-graphs and all cpu-graphs except the cpu-graph of a single VM (it is not possible to get the number of cores of a VMware host via the vSphere API and thus no usage in percent can be calculated).

In vmware-monitor-graph.cfg various graphs are already defined. See the list below for all available graphs; if alternative graphs are available the default-graph is marked with "DEFAULT" (these graphs are named identically to the column-name in the webinterface).

TODO: XXX

7   Usage

This chapter gives hints about using Vmware-Monitor for Xymon (VMX) and explains some details.

[ Work in progress: this chapter is not finished yet! ]

7.1   Columns explained

After the first start of your Xymon server with Vmware-Monitor for Xymon (VMX) installed (more precisely: after reception of the first data-set from vmware-monitorc) some additional columns show up on your Xymon webinterface (hopefully this is already the case if you followed this guide). These columns are documented in this section.

A general note about the units used:

  • CPU-usage: "cpuuse" is always the absolute value of the usage in [MHz], "cpuuse_pct" is the percentage used of the available CPU-speed in the context of the current view.
  • MEM-usage: "memuse" is in [MB], "memuse_pct" in percent analogous to the CPU-values.
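As a purely illustrative example (the numbers are invented), using the vmwareO-metrics from the SPLITNCV-settings in section 5.2.2:

    physcpuavail    = 8000 MHz                    (capacity of all monitored hosts)
    physcpuused     = 1200 MHz                    (absolute usage)
    physcpuused_pct = 1200 / 8000 * 100 = 15 %    (the value shown in the "_pct"-graphs)

cpuuse/cpuuse_pct and memuse/memuse_pct follow the same pattern within their respective view.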

In total there are five additional columns that show up.

vmwareO

Overview for all VMware hosts monitored. This column is only generated for the host running the client vmware-monitorc -- typically your vMA.

Each vmware host in the overview-table at the bottom is hyper-linked to the respective vmware-column of that host.

Graphs: vmwareO_pct (default), vmwareO_abs, vmwareO_cpu, vmwareO_mem, vmwareO_counters, vmwareO_runtime

vmware

A summary generated for each monitored VMware host ("esx1" and "esx2" in the example above), graphing cpuuse and memuse.

The number of vCPUs assigned to the running VMs and the number of currently running VMs are graphed, too.

For each VM in the "running VMs"-table at the bottom the cpuuse- (memuse-) column links to vm_cpu (vm_mem) of that VM. The VMname links to vm_status of the VM. In addition the vmware-tools-status is shown in this table.

An additional link to the vmwareO-column of the vMA that sent the data is somewhat hidden in the >>> client-script-line: the hostname of the machine running vmware-monitorc is the hyperlink.

Graphs: vmware_pct (default), vmware_abs

vm_cpu

CPU-usage and the number of assigned vCPUs for each individual VM running.

Column is generated for each running VM.

Graphs: vm_cpu

vm_mem

Memory-usage for each individual VM running (graphing in absolute values, i.e. [MB]) and in % as an alternative.

Column is generated for each running VM.

Graphs: vm_mem_abs (default), vm_mem_pct

vm_status

Various status-information about the VM: the host it is running on (links back to the host's "vmware"-column), name, path to the vmx-file with datastore, number of VMDKs, number of vCPUs, memory-size, ...

It also shows the status of the vmware-tools and, if the tools are running, additional information like hostname, ipaddress, guestos, ...

Column is generated for each running VM.

Graphs: vm_status

7.2   Statistics reported in "vmwareO"

(TODO: _not_ complete!)

  • >> client-script statistics:
  • runtime_c is the runtime of vmware-monitorc in seconds and a metric to keep an eye on. If the monitored VMware hosts are highly loaded the runtime may increase significantly. For a 7-host-environment a runtime of ~2 ... 2.5 minutes is reasonable. If the runtime exceeds 3 minutes, tuning steps have to take place in order to avoid problems with the 5 minute test-interval. The graph vmwareO_runtime helps tracking the runtime.
  • >> server-script statistics:
  • runtime_d is the time vmware-monitord needed to parse the last client-message (in seconds) and will almost always show "0" as (g)awk only has a time accuracy of one second (to my knowledge). In future versions of Vmware-Monitor for Xymon (VMX) the runtime_d might be calculated as an average from multiple runs to provide something useful.

8   Manual testing / advanced debugging

server-side testing:

  1. (optional but helpful) create a test-file w/ _one_ user-status:
  1. stop the Xymon client on the vMA

  2. on the Xymon server run (as monitoring-user of course):

    ~/server/bin/bbcmd hobbitd_channel --channel=user cat >teststatus.txt
    
  3. on the (vMA-) client send one status (as monitoring user):

    ~/client/bin/bbcmd /usr/lib/hobbit/client/ext/vmware-monitorc
    
  4. CTRL-C the hobbitd_channel started in step 2

Now teststatus.txt contains one complete user-channel datagram that can be fed to vmware-monitord for debugging.

  2. on the server feed the teststatus.txt through vmware-monitord:

    cat teststatus.txt | ./vmware-monitord
    

    The above command would not send any status to the Xymon server as the Xymon environment is missing. To send the status messages to the Xymon-server use:

    cat teststatus.txt | ~/server/bin/bbcmd ./vmware-monitord
    

simulating a client-message: