Hortium Glossary

Overview #

Hortium is a software company focused on helping IT teams shape smart, productive workplaces. We bring clarity to your workplace through a unique combination of real-time analytics, automation, and employee experience testing.

Agent overview #

The Hortium Agent is lightweight software based on patented technology. It tests, captures, and reports application, service, cloud, and network connections, program executions, web requests, and many other activities and properties from the employee or edge device on which it runs. It is implemented as a piece of software with accompanying services, offering remote, automated silent installation with negligible impact on system performance and minimal network traffic.

Glossary of terms #

EUX Definitions

EUX Score                                – End User Experience: a score from 1 to 10 showing the experience being delivered to the end user, based on service and testing data.

Performance against SLA        – The performance of a service test or testing element against the defined SLA.

Availability                              – A score showing the availability of an element.

Frustration Rate                      – A score from 1 to 10 showing how frustrated a user may be with a service or element being delivered.

Agent Heat Map                      – The Hortium heat map shows the agent/location/user against the metrics affecting their performance. It identifies hotspots causing performance issues across the digital supply chain:

Clear – Meeting Objectives

Attention – Service degradation may occur

Trouble – Service degradation started

Error – Service affecting
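The four statuses above can be sketched as a simple classifier. This is a hypothetical illustration only: the 0–100 score range and the numeric thresholds are assumptions for the example, not Hortium's actual boundaries.

```python
# Hypothetical sketch: map a 0-100 performance score to the four
# heat-map statuses. Thresholds are illustrative assumptions.

def heatmap_status(score: float) -> str:
    """Classify a performance score into a heat-map status."""
    if score >= 90:
        return "Clear"      # meeting objectives
    elif score >= 75:
        return "Attention"  # service degradation may occur
    elif score >= 50:
        return "Trouble"    # service degradation started
    else:
        return "Error"      # service affecting

print(heatmap_status(95))  # Clear
print(heatmap_status(60))  # Trouble
```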

Latency                                    – Network latency is the amount of time it takes for a data packet to travel from one place to another, in milliseconds (ms).

Bandwidth Download Score   – The average of all bandwidth download tests, banded relative to that average: -20% is bad, -10% is poor, +10% is good, +20% is excellent.

Bandwidth Upload Score        – The average of all bandwidth upload tests, banded relative to that average: -20% is bad, -10% is poor, +10% is good, +20% is excellent.

Latency Score                          – The average of all latency tests, banded relative to that average: -20% is bad, -10% is poor, +10% is good, +20% is excellent.
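The banding used by the three scores above can be sketched as follows. This is a minimal illustration assuming the bands apply to a single test's percentage deviation from the average of all tests; the "average" label for the middle band and the function name are assumptions not defined in the glossary.

```python
# Illustrative sketch of the -20/-10/+10/+20 banding described above.

def band_label(value: float, average: float) -> str:
    """Label a single test result relative to the average of all tests."""
    deviation = (value - average) / average * 100.0  # percent deviation
    if deviation <= -20:
        return "bad"
    if deviation <= -10:
        return "poor"
    if deviation >= 20:
        return "excellent"
    if deviation >= 10:
        return "good"
    return "average"  # middle band: assumed label

print(band_label(80, 100))   # bad (-20%)
print(band_label(120, 100))  # excellent (+20%)
```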

DNS                                         – DNS time is the time it takes a Domain Name Server to receive the request for a domain name's IP address, process it, and return the IP address to the browser. In milliseconds (ms).

Transaction Time                    – The time it takes for the service test to complete. In seconds.

Hop                                         – A hop occurs when a packet is passed from one network segment to the next. Measured in number of hops.

Count or Count of Tests          – Number of tests performed during the period. Numeric value.

Fails                                         – Number of times a service test fails. Numeric value.

Speed                                      – The time it takes for the service test to complete. In seconds.

Performance                           – Overall performance of service tests, out of 100.

Network LAN                           – The Local Area Network in which a Beacon is performing tests. This includes local Wi-Fi and LAN connections.

Network WAN                         – The Wide Area Network

Cloud                                       – CDN and Cloud Service Provider

Change in Discovered devices – The change in the number of devices discovered vs the previous time period. Numeric value.

Change in EUX                         – The change in the EUX score vs the previous time period. Numeric value.

Change in Latency                   – The change in the Latency score vs the previous time period. Numeric value.

Change in Loss/Failures          – The change in the Loss score vs the previous time period. Numeric value.

Location                                  – A location: either where the tests are being performed or where the Bandwidth test originates.

Downtime                               – The time, in minutes, of service downtime.

Cost                                         – The cost of downtime per minute, based on either the default values (Gartner-based) or values specified by the user.

Connections                            – Number of connections over a period of time.

Waterfall                                 – a breakdown of specific performance values for an application and service journey

STATUS                        – Whether the transaction completed: SUCCESS or FAILED.

Field Speed index        – Using large datasets and neural-network AI/ML models, measures how quickly the service journey is visually displayed and ready for user interaction. The current implementation is based on a visual-progress calculation from video capture.

Field Total Blocking                 – Using large datasets and neural-network AI/ML models, measures the total amount of time between the initial content of the service and user service interaction during which the main task interaction was blocked long enough to prevent input responsiveness.

Field Time to Interactive         – Using large datasets and neural-network AI/ML models, measures the time from when the service task starts processing to when its main sub-resources have been processed and the service can reliably respond to user input quickly and complete the service journey. Numeric value.

Speed index                            – Measures how quickly the service journey is visually displayed and ready for user interaction. The current implementation is based on a visual-progress calculation from video capture.

Total Blocking Time                 – Measures the total amount of time between the initial content of the service and user service interaction during which the main task interaction was blocked long enough to prevent input responsiveness.

Time to Interactive                 – Measures the time from when the service task starts processing to when its main sub-resources have been processed and the service can reliably respond to user input quickly and complete the service journey.

Performance                           – The overall Performance score for the transaction over the period.

Availability                              – The accessibility of the application journey

WIFI                                         – Shows the Wi-Fi signal noise in dB; lower numbers are better.

Signal strength quality to expect, and the required level for each use case:

-30 dBm Maximum signal strength, you are probably standing right next to the access point / router.

-50 dBm Anything down to this level can be regarded as excellent signal strength.

-60 dBm This is still good, reliable signal strength.

-67 dBm This is the minimum value for all services that require smooth and reliable data traffic: VoIP/VoWi-Fi and video streaming (not the highest quality).

-70 dBm The signal is not very strong, but mostly sufficient. Web, email, and the like

-80 dBm Minimum value required to make a connection. You cannot count on a reliable connection or sufficient signal strength to use services at this level.

-90 dBm It is very unlikely that you will be able to connect or make use of any services with this signal strength.
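The dBm bands above can be sketched as a small classifier. The thresholds come from the table; the short label strings and the function name are paraphrases invented for this example.

```python
# Sketch: map a Wi-Fi RSSI reading (dBm) to the quality bands listed
# above. Labels are illustrative paraphrases of the table entries.

def wifi_quality(dbm: int) -> str:
    """Classify a Wi-Fi signal strength reading in dBm."""
    if dbm >= -50:
        return "excellent"
    elif dbm >= -60:
        return "good"
    elif dbm >= -67:
        return "reliable minimum (VoIP/video)"
    elif dbm >= -70:
        return "sufficient (web/email)"
    elif dbm >= -80:
        return "unreliable"
    else:
        return "unusable"

print(wifi_quality(-45))  # excellent
print(wifi_quality(-75))  # unreliable
```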

App                                          – the application or service being tested

Network Server Latency         – Network server latency refers to the time it takes for a server to process and respond to a request from a client. Lower latency means faster response times, while higher latency indicates delays in server processing and can impact application performance.

HTTP Status Code                   – HTTP status codes are three-digit numbers returned by a server in response to a client’s HTTP request. They provide information about the status of the request and can indicate success, redirection, client or server errors, or other specific conditions. These codes help in understanding and troubleshooting the outcome of an HTTP request and are grouped into different categories, such as:

1xx (Informational), 2xx (Success), 3xx (Redirection), 4xx (Client Error), and 5xx (Server Error).
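The five categories map directly off the first digit of the code, which a monitoring tool can exploit. A minimal sketch (the function name is illustrative):

```python
# Classify an HTTP status code by its leading digit, per the five
# standard categories listed above.

def status_class(code: int) -> str:
    """Return the HTTP status-code category for a three-digit code."""
    classes = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    return classes.get(code // 100, "Unknown")

print(status_class(200))  # Success
print(status_class(404))  # Client Error
```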

Device Information                 – Information about the monitored device

Devices                                    – Number of devices

Healthy                                    – MTR devices which report as Healthy

Offline                                     – MTR devices which report as Offline

Firmware Status                      – Indicates if the firmware is current or in need of an update.

Need Update                           – devices that need updating

Offline                                     – devices which show as offline

Device Source                         – how the device was found

Last Test Date                         – last time the test was run

Last Connected Date               – the last time the device reported as connected

Room                                      – The name of the room in which the device is located.

Connected Status                    – Indicates if the device is connected

Peripheral Status                    – Indicates if the device peripherals are connected

Active Devices                         – number of devices which report as active

Metric Type #

Default Metric Values #

Metric:                                                Target
Latency (one way):                              < 50 ms
Latency (RTT or round-trip time):       < 100 ms
Burst packet loss:                                < 10% during any 200 ms interval
Packet loss:                                         < 1% during any 15 s interval
Packet inter-arrival jitter:                   < 30 ms during any 15 s interval
Packet reorder:                                   < 0.05% out-of-order packets
DNS:                                                    10 ms
Transaction time:                                3 seconds
Number of hops:                                 15
EUX Score:                                          8
SLA score:                                            70%
Availability:                                         95%
Bandwidth Score (average across tests, as %): 80%
Number of transactions:                     95%
MOS score:                                         4
Total downtime of agent:                   5%
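The lower-is-better targets from the table can be expressed as a small configuration check. The dictionary keys and function name are invented for this sketch, and only metrics where a lower measured value is better are included (score-type targets such as EUX or Availability would need a greater-or-equal check instead).

```python
# Illustrative sketch: a few of the default targets above, limited to
# metrics where lower measured values are better. Key names are
# invented for this example.
DEFAULT_TARGETS = {
    "latency_one_way_ms": 50,
    "latency_rtt_ms": 100,
    "dns_ms": 10,
    "transaction_time_s": 3,
    "max_hops": 15,
}

def within_target(metric: str, value: float) -> bool:
    """True when the measured value is at or below the default target."""
    return value <= DEFAULT_TARGETS[metric]

print(within_target("latency_rtt_ms", 85))  # True
print(within_target("dns_ms", 25))          # False
```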
