Using PTP hardware
- About PTP hardware
- About PTP
- Installing the PTP Operator using the CLI
- Installing the PTP Operator using the web console
- Automated discovery of PTP network devices
- Configuring linuxptp services as ordinary clock
- Configuring linuxptp services as boundary clock
- Troubleshooting common PTP Operator issues
- About PTP and clock synchronization error events
- About the PTP fast event notifications framework
- Installing the AMQ messaging bus
- Configuring the PTP fast event notifications publisher
- PTP fast event notifications REST API reference
- Monitoring PTP fast event metrics using the CLI
- Monitoring PTP fast event metrics in the web console
Precision Time Protocol (PTP) hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
About PTP hardware
OKD allows you to use Precision Time Protocol (PTP) hardware on your nodes. You can configure linuxptp services on nodes that have PTP-capable hardware.
The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.
You can use the OKD console or the oc CLI to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:
- Discovery of the PTP-capable devices in the cluster.
- Management of the configuration of linuxptp services.
- Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar.
About PTP
The Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).
The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. ptp4l synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. phc2sys is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC).
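For reference, outside of the Operator these programs are typically started directly on a host. The following is a minimal sketch, assuming a PTP-capable interface named ens787f1; on OKD, the PTP Operator assembles and runs the equivalent commands for you:
# ptp4l -i ens787f1 -s -2 -m
# phc2sys -a -r -m
Here -s runs ptp4l in client-only mode, -2 selects the IEEE 802.3 network transport, -a lets phc2sys autoconfigure from the running ptp4l instance, -r also synchronizes the system realtime clock, and -m prints messages to stdout.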
Elements of a PTP domain
PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The following types of clocks can be included in configurations:
Grandmaster clock
The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. The grandmaster clock writes time stamps and responds to time requests from other clocks.
Ordinary clock
The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.
Boundary clock
The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
Advantages of PTP over NTP
One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled.
Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.
Before enabling PTP, ensure that NTP is disabled for the required nodes. For example, you can disable the chrony time service (chronyd) by using a MachineConfig custom resource.
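The following is a minimal sketch of a MachineConfig CR that disables chronyd on worker nodes; the name and role label are illustrative and should be adapted to the nodes that you configure for PTP:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: chronyd.service
          enabled: false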
Installing the PTP Operator using the CLI
As a cluster administrator, you can install the PTP Operator by using the CLI.
Prerequisites
A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
To create a namespace for the PTP Operator, enter the following command:
$ cat << EOF| oc create -f -
apiVersion: v1
kind: Namespace
metadata:
name: openshift-ptp
annotations:
workload.openshift.io/allowed: management
labels:
name: openshift-ptp
openshift.io/cluster-monitoring: "true"
EOF
To create an Operator group for the Operator, enter the following command:
$ cat << EOF| oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: ptp-operators
namespace: openshift-ptp
spec:
targetNamespaces:
- openshift-ptp
EOF
Subscribe to the PTP Operator.
Run the following command to set the OKD major and minor version as an environment variable, which is used as the channel value in the next step:
$ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
  grep -o '[0-9]*[.][0-9]*' | head -1)
To create a subscription for the PTP Operator, enter the following command:
$ cat << EOF| oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ptp-operator-subscription
namespace: openshift-ptp
spec:
channel: "${OC_VERSION}"
name: ptp-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
EOF
To verify that the Operator is installed, enter the following command:
$ oc get csv -n openshift-ptp \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name Phase
ptp-operator.4.4.0-202006160135 Succeeded
Installing the PTP Operator using the web console
As a cluster administrator, you can install the PTP Operator using the web console.
You must create the namespace and Operator group as described in the previous section.
Procedure
Install the PTP Operator using the OKD web console:
In the OKD web console, click Operators → OperatorHub.
Choose PTP Operator from the list of available Operators, and then click Install.
On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.
Optional: Verify that the PTP Operator installed successfully:
Switch to the Operators → Installed Operators page.
Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.
During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not appear as installed, troubleshoot further:
Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
Automated discovery of PTP network devices
The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OKD.
The PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP device.
One CR is created for each node and shares the same name as the node. The .status.devices list provides information about the PTP devices on a node.
The following is an example of a NodePtpDevice CR created by the PTP Operator:
apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
creationTimestamp: "2019-11-15T08:57:11Z"
generation: 1
name: dev-worker-0 (1)
namespace: openshift-ptp (2)
resourceVersion: "487462"
selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0
uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f
spec: {}
status:
devices: (3)
- name: eno1
- name: eno2
- name: ens787f0
- name: ens787f1
- name: ens801f0
- name: ens801f1
- name: ens802f0
- name: ens802f1
- name: ens803
1 The value for the name parameter is the same as the name of the node.
2 The CR is created in the openshift-ptp namespace by the PTP Operator.
3 The devices collection includes a list of the PTP-capable devices discovered by the Operator on the node.
To return a complete list of PTP-capable network devices in your cluster, run the following command:
$ oc get NodePtpDevice -n openshift-ptp -o yaml
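To list only the interface names for a single node, a jsonpath query is a convenient alternative; the node name here is illustrative:
$ oc get nodeptpdevice dev-worker-0 -n openshift-ptp -o jsonpath='{.status.devices[*].name}'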
Configuring linuxptp services as ordinary clock
The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OKD. You can configure the linuxptp services (ptp4l, phc2sys) by creating a PtpConfig custom resource (CR) object.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Procedure
Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file.
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: ordinary-clock-ptp-config (1)
namespace: openshift-ptp
spec:
profile: (2)
- name: "profile1" (3)
interface: "ens787f1" (4)
ptp4lOpts: "-s -2" (5)
phc2sysOpts: "-a -r" (6)
ptp4lConf: "" (7)
recommend: (8)
- profile: "profile1" (9)
priority: 10 (10)
match: (11)
- nodeLabel: "node-role.kubernetes.io/worker" (12)
nodeName: "compute-0.example.com" (13)
1 The name of the PtpConfig CR.
2 Specify an array of one or more profile objects.
3 Specify the name of a profile object that uniquely identifies a profile object.
4 Specify the network interface to be used by the ptp4l service, for example ens787f1.
5 Specify system config options for the ptp4l service, for example -s -2. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended.
6 Specify system config options for the phc2sys service, for example -a -r. If this field is empty, the PTP Operator does not start the phc2sys service.
7 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
8 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.
9 Specify the profile object name defined in the profile section.
10 Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.
11 Specify match rules with nodeLabel or nodeName.
12 Specify nodeLabel with the key of node.Labels from the node object.
13 Specify nodeName with node.Name from the node object.
Create the CR by running the following command:
$ oc create -f ordinary-clock-ptp-config.yaml
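Optionally, confirm that the CR now exists before checking the node; this is a quick sanity check:
$ oc get ptpconfig -n openshift-ptp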
Verification steps
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com
linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com
ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp
Example output
I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1
I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2
I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r
I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
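If you also want to confirm that the clock is converging, you can filter the same daemon logs for the offset reports that ptp4l prints while synchronizing; the pod name is taken from the example above, and the container name is an assumption based on the pod layout described in the troubleshooting section:
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container | grep 'master offset'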
Configuring linuxptp services as boundary clock
The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OKD. You can configure the linuxptp services (ptp4l, phc2sys) by creating a PtpConfig custom resource (CR) object.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Procedure
Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file.
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: boundary-clock-ptp-config (1)
namespace: openshift-ptp
spec:
profile: (2)
- name: "profile1" (3)
interface: "" (4)
ptp4lOpts: "-s -2" (5)
ptp4lConf: | (6)
[ens1f0] (7)
masterOnly 0
[ens1f3] (8)
masterOnly 1
[global]
#
# Default Data Set
#
twoStepFlag 1
#slaveOnly 1
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 248
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval 4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 10
#was 1 (default !)
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval -4
kernel_leap 1
check_fup_sync 0
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type OC
network_transport UDPv4
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 1
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
phc2sysOpts: "-a -r" (9)
recommend: (10)
- profile: "profile1" (11)
priority: 10 (12)
match: (13)
- nodeLabel: "node-role.kubernetes.io/worker" (14)
nodeName: "compute-0.example.com" (15)
1 The name of the PtpConfig CR.
2 Specify an array of one or more profile objects.
3 Specify the name of a profile object that uniquely identifies a profile object.
4 This field must remain empty for a boundary clock.
5 Specify system config options for the ptp4l service, for example -s -2. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended.
6 Specify the configuration that is needed to start ptp4l as a boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices.
7 The interface name to synchronize from.
8 The interface that synchronizes devices connected to it.
9 Specify system config options for the phc2sys service, for example -a -r. If this field is empty, the PTP Operator does not start the phc2sys service.
10 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.
11 Specify the profile object name defined in the profile section.
12 Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.
13 Specify match rules with nodeLabel or nodeName.
14 Specify nodeLabel with the key of node.Labels from the node object.
15 Specify nodeName with node.Name from the node object.
Create the CR by running the following command:
$ oc create -f boundary-clock-ptp-config.yaml
Verification steps
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com
linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com
ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp
Example output
I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
I1115 09:41:17.117616 4143292 daemon.go:102] Interface:
I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2
I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r
I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
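Because a boundary clock requires hardware time stamping on each port, it can also be worth confirming the NIC capabilities from inside the daemon pod. The following is a sketch with illustrative pod and interface names, using the standard ethtool time-stamping query:
$ oc exec -it linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container -- ethtool -T ens1f0
Look for hardware-transmit and hardware-receive in the reported capabilities.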
Troubleshooting common PTP Operator issues
Troubleshoot common problems with the PTP Operator by performing the following steps.
Prerequisites
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator on a bare-metal cluster with hosts that support PTP.
Procedure
Check that the Operator and operand pods are successfully deployed in the cluster for the configured nodes.
$ oc get pods -n openshift-ptp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3. If the PTP fast event bus is not enabled, 2/2 is displayed.
Check that supported hardware is found in the cluster.
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io
Example output
NAME AGE
control-plane-0.example.com 10d
control-plane-1.example.com 10d
compute-0.example.com 10d
compute-1.example.com 10d
compute-2.example.com 10d
Check the available PTP network interfaces for a node:
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml
where:
<node_name>
Specifies the node you want to query, for example, compute-0.example.com.
Example output
apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
creationTimestamp: "2021-09-14T16:52:33Z"
generation: 1
name: compute-0.example.com
namespace: openshift-ptp
resourceVersion: "177400"
uid: 30413db0-4d8d-46da-9bef-737bacd548fd
spec: {}
status:
devices:
- name: eno1
- name: eno2
- name: eno3
- name: eno4
- name: enp5s0f0
- name: enp5s0f1
Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node.
Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
Remote shell into the required linuxptp-daemon container:
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>
where:
<linux_daemon_container>
Specifies the container you want to diagnose, for example, linuxptp-daemon-lmvgn.
In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client (pmc) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l.
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'
Example output when the node is successfully synced to the primary clock
sending: GET PORT_DATA_SET
40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET
portIdentity 40a6b7.fffe.166ef0-1
portState SLAVE
logMinDelayReqInterval -4
peerMeanPathDelay 0
logAnnounceInterval -3
announceReceiptTimeout 3
logSyncInterval -4
delayMechanism 1
logMinPdelayReqInterval -4
versionNumber 2
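You can also query the live offset from the source clock with the same tool; GET CURRENT_DATA_SET is a standard pmc management request that reports offsetFromMaster in nanoseconds:
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET CURRENT_DATA_SET'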
About PTP and clock synchronization error events
Cloud native applications such as virtual RAN require access to notifications about hardware timing events that are critical to the functioning of the overall network. Fast event notifications are early warning signals about impending and real-time Precision Time Protocol (PTP) clock synchronization events. PTP clock synchronization errors can negatively affect the performance and reliability of your low latency application, for example, a vRAN application running in a distributed unit (DU).
Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU.
Event notifications are available to RAN applications running on the same DU node. A publish/subscribe REST API passes event notifications to the messaging bus. Publish/subscribe messaging, or pub/sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic.
Fast event notifications are generated by the PTP Operator in OKD for every PTP-capable network interface. The events are made available using a cloud-event-proxy sidecar container over an Advanced Message Queuing Protocol (AMQP) message bus. The AMQP message bus is provided by the AMQ Interconnect Operator.
PTP fast event notifications are available only for network interfaces configured to use PTP ordinary clocks.
About the PTP fast event notifications framework
You can subscribe distributed unit (DU) applications to Precision Time Protocol (PTP) fast event notifications that are generated by OKD with the PTP Operator and cloud-event-proxy sidecar container. You enable the cloud-event-proxy sidecar container by setting the enableEventPublisher field to true in the ptpOperatorConfig custom resource (CR) and specifying a transportHost address. PTP fast events use an Advanced Message Queuing Protocol (AMQP) event notification bus provided by the AMQ Interconnect Operator. AMQ Interconnect is a component of Red Hat AMQ, a messaging router that provides flexible routing of messages between any AMQP-enabled endpoints.
The cloud-event-proxy sidecar container can access the same resources as the primary vRAN application without using any of the resources of the primary application and with no significant latency.
The fast event notifications framework uses a REST API for communication and is based on the O-RAN REST API specification. The framework consists of a publisher, subscriber, and an AMQ messaging bus to handle communications between the publisher and subscriber applications. The cloud-event-proxy sidecar is a utility container that runs in a pod that is loosely coupled to the main DU application container on the DU node. It provides an event publishing framework that allows you to subscribe DU applications to published PTP events.
DU applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The following workflow describes how a DU application uses PTP fast events:
1. DU application requests a subscription: The DU sends an API request to the cloud-event-proxy sidecar to create a PTP events subscription. The cloud-event-proxy sidecar creates a subscription resource.
2. cloud-event-proxy sidecar creates the subscription: The event resource is persisted by the cloud-event-proxy sidecar. The cloud-event-proxy sidecar container sends an acknowledgment with an ID and URL location to access the stored subscription resource. The sidecar creates an AMQ messaging listener protocol for the resource specified in the subscription.
3. DU application receives the PTP event notification: The cloud-event-proxy sidecar container listens to the address specified in the resource qualifier. The DU events consumer processes the message and passes it to the return URL specified in the subscription.
4. cloud-event-proxy sidecar validates the PTP event and posts it to the DU application: The cloud-event-proxy sidecar receives the event, unwraps the cloud events object to retrieve the data, and fetches the return URL to post the event back to the DU consumer application.
5. DU application uses the PTP event: The DU application events consumer receives and processes the PTP event.
Installing the AMQ messaging bus
To pass PTP fast event notifications between publisher and subscriber on a node, you must install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster.
Prerequisites
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
- Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Installing the AMQ Interconnect Operator.
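If you create the router instance from the CLI rather than accepting the Operator defaults in the web console, the following is a minimal sketch of an Interconnect CR; the API version and names reflect the upstream qdr-based Operator and might differ in your installation:
apiVersion: interconnectedcloud.github.io/v1alpha1
kind: Interconnect
metadata:
  name: amq-interconnect
  namespace: amq-interconnect
spec: {}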
Verification
Check that the AMQ Interconnect Operator is available and the required pods are running:
$ oc get pods -n amq-interconnect
Example output
NAME READY STATUS RESTARTS AGE
amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h
interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h
Check that the required linuxptp-daemon PTP event producer pods are running in the openshift-ptp namespace.
$ oc get pods -n openshift-ptp
Example output
NAME READY STATUS RESTARTS AGE
linuxptp-daemon-2t78p 3/3 Running 0 12h
linuxptp-daemon-k8n88 3/3 Running 0 12h
Configuring the PTP fast event notifications publisher
To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.
Prerequisites
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator and AMQ Interconnect Operator.
Procedure
Modify the spec.ptpEventConfig field of the PtpOperatorConfig resource and set appropriate values by running the following command:
$ oc edit PtpOperatorConfig default -n openshift-ptp
...
spec:
daemonNodeSelector:
node-role.kubernetes.io/worker: ""
ptpEventConfig:
enableEventPublisher: true (1)
transportHost: amqp://<instance_name>.<namespace>.svc.cluster.local (2)
1 Set enableEventPublisher to true to enable PTP fast event notifications.
2 Set transportHost to the AMQ router that you configured, where <instance_name> and <namespace> correspond to the AMQ Interconnect router instance name and namespace, for example, amqp://amq-interconnect.amq-interconnect.svc.cluster.local.
Create a PtpConfig custom resource for the PTP-enabled interface, and set the required values for ptpClockThreshold, for example:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: example-ptpconfig
namespace: openshift-ptp
spec:
profile:
- name: "profile1"
interface: "enp5s0f0"
ptp4lOpts: "-2 -s --summary_interval -4"
phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
ptpClockThreshold:
holdOverTimeout: 5 (1)
maxOffsetThreshold: 100 (2)
minOffsetThreshold: -100 (3)
recommend:
- profile: "profile1"
priority: 4
match:
- nodeLabel: "node-role.kubernetes.io/worker"
1 Number of seconds to stay in the clock holdover state. Holdover state is the period between local and master clock synchronizations.
2 Maximum offset value in nanoseconds. Offset is the time difference between the local and master clock.
3 Minimum offset value in nanoseconds.
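After you apply the CR, you can check that the event publisher sidecar started cleanly by reading its logs; the pod name here is illustrative, and the cloud-event-proxy container name matches the one used in the metrics commands later in this document:
$ oc logs linuxptp-daemon-2t78p -n openshift-ptp -c cloud-event-proxy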
PTP fast event notifications REST API reference
You can use the PTP fast event notifications REST API to subscribe an application to the PTP events that are generated for the parent node. PTP fast event notifications are available on each node where a PTP-capable network interface is configured.
You can subscribe DU applications to PTP event notifications by using the resource address /cluster/node/<node_name>/ptp, where <node_name> is the cluster node running the DU application.
Status requests are sent by making a REST API PUT call to /subscriptions/status/<subscription_id>. The API call returns events through the AMQP event bus to the subscribed address.
The PTP fast event notifications REST API is served at http://localhost:8080 by default. The following API endpoints are available:
/api/cloudNotifications/v1/publishers
- POST: Creates a new publisher
- GET: Retrieves a list of publishers
/api/cloudNotifications/v1/publishers/<publisher_id>
- GET: Returns details for the specified publisher ID
/api/cloudNotifications/v1/subscriptions
- POST: Creates a new subscription
- GET: Retrieves a list of subscriptions
/api/cloudNotifications/v1/subscriptions/<subscription_id>
- GET: Returns details for the specified subscription ID
/api/cloudNotifications/v1/subscriptions/status/<subscription_id>
- PUT: Creates a new status ping request for the specified subscription ID
/api/cloudNotifications/v1/health
- GET: Returns the health status of the cloudNotifications API
api/cloudNotifications/v1/publishers
HTTP method
POST api/cloudNotifications/v1/publishers
Description
Creates a new publisher. If publisher creation is successful, or if it already exists, a 201 Created status code is returned.
Parameter | Type
---|---
publisher | data
Example payload
{
"id": "56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/publishers/56e8a064-dc4b-4428-8085-91c18ea07930",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location --request POST 'http://localhost:8080/api/cloudNotifications/v1/publishers' --header 'Content-Type: application/json' --insecure --data ' {
"id": "56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/publishers/56e8a064-dc4b-4428-8085-91c18ea07930",
"resource": "/cluster/node/compute-1.example.com/ptp"
}'
HTTP method
GET api/cloudNotifications/v1/publishers
Description
Returns a list of publishers. If publishers exist, a 200 OK status code is returned along with the list of publishers.
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location http://localhost:8080/api/cloudNotifications/v1/publishers
Example return
[
{
"id": "56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/publishers/56e8a064-dc4b-4428-8085-91c18ea07930",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
]
api/cloudNotifications/v1/publishers/<publisher_id>
HTTP method
GET api/cloudNotifications/v1/publishers/<publisher_id>
Description
Returns the publisher with ID <publisher_id>.
Parameter | Type
---|---
<publisher_id> | string
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location http://localhost:8080/api/cloudNotifications/v1/publishers/56e8a064-dc4b-4428-8085-91c18ea07930
Example return
{
"id":"56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri":"http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation":"http://localhost:8080/api/cloudNotifications/v1/publishers/56e8a064-dc4b-4428-8085-91c18ea07930",
"resource":"/cluster/node/compute-1.example.com/ptp"
}
api/cloudNotifications/v1/subscriptions
HTTP method
GET api/cloudNotifications/v1/subscriptions
Description
Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location http://localhost:8080/api/cloudNotifications/v1/subscriptions
Example return
[
{
"id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
]
HTTP method
POST api/cloudNotifications/v1/subscriptions
Description
Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned.
Parameter | Type
---|---
subscription | data
Example payload
{
"id": "56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/subscriptions/56e8a064-dc4b-4428-8085-91c18ea07930",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location --request POST 'http://localhost:8080/api/cloudNotifications/v1/subscriptions' --header 'Content-Type: application/json' --insecure --data ' {
"id": "56e8a064-dc4b-4428-8085-91c18ea07930",
"endpointUri": "http://localhost:8080/api/cloudNotifications/v1/dummy",
"uriLocation": "http://localhost:8080/api/cloudNotifications/v1/subscriptions/75b1ad8f-dc4b-4428-8085-91c18ea07930",
"resource": "/cluster/node/compute-1.example.com/ptp"
}'
api/cloudNotifications/v1/subscriptions/<subscription_id>
HTTP method
GET api/cloudNotifications/v1/subscriptions/<subscription_id>
Description
Returns details for the subscription with ID <subscription_id>.
Parameter | Type
---|---
<subscription_id> | string
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location http://localhost:8080/api/cloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab
Example return
{"id":"48210fb3-45be-4ce0-aa9b-41a0e58730ab","endpointUri":"http://localhost:8080/api/cloudNotifications/v1/dummy","uriLocation":"http://localhost:8080/api/cloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab","resource":"/cluster/node/compute-1.example.com/ptp"}
api/cloudNotifications/v1/subscriptions/status/<subscription_id>
HTTP method
PUT api/cloudNotifications/v1/subscriptions/status/<subscription_id>
Description
Creates a new status ping request for the subscription with ID <subscription_id>. If a subscription is present, the status request is successful and a 202 Accepted status code is returned.
Parameter | Type
---|---
<subscription_id> | string
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location --request PUT http://localhost:8080/api/cloudNotifications/v1/subscriptions/status/48210fb3-45be-4ce0-aa9b-41a0e58730ab
Example output
{"status":"ping sent"}
api/cloudNotifications/v1/health/
HTTP method
GET api/cloudNotifications/v1/health/
Description
Returns the health status for the cloudNotifications REST API.
Example oc exec curl command
$ oc exec -it linuxptp-daemon-5j265 -n openshift-ptp -c cloud-event-proxy -- curl --location http://localhost:8080/api/cloudNotifications/v1/health
Example return
OK
Monitoring PTP fast event metrics using the CLI
You can monitor fast events bus metrics directly from cloud-event-proxy containers by using the oc CLI.
PTP fast event notification metrics are also available in the OKD web console.
Prerequisites
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install and configure the PTP Operator.
Procedure
Get the list of active linuxptp-daemon pods.
$ oc get pods -n openshift-ptp
Example output
NAME READY STATUS RESTARTS AGE
linuxptp-daemon-2t78p 3/3 Running 0 8h
linuxptp-daemon-k8n88 3/3 Running 0 8h
Access the metrics for the required cloud-event-proxy container by running the following command:
$ oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics
where:
<linuxptp-daemon>
Specifies the pod you want to query, for example, linuxptp-daemon-2t78p.
Example output
# HELP cne_amqp_events_published Metric to get number of events published by the transport
# TYPE cne_amqp_events_published gauge
cne_amqp_events_published{address="/cluster/node/compute-1.example.com/ptp/status",status="success"} 1041
# HELP cne_amqp_events_received Metric to get number of events received by the transport
# TYPE cne_amqp_events_received gauge
cne_amqp_events_received{address="/cluster/node/compute-1.example.com/ptp",status="success"} 1019
# HELP cne_amqp_receiver Metric to get number of receiver created
# TYPE cne_amqp_receiver gauge
cne_amqp_receiver{address="/cluster/node/mock",status="active"} 1
cne_amqp_receiver{address="/cluster/node/compute-1.example.com/ptp",status="active"} 1
cne_amqp_receiver{address="/cluster/node/compute-1.example.com/redfish/event",status="active"}
...
Monitoring PTP fast event metrics in the web console
You can monitor PTP fast event metrics in the OKD web console by using the pre-configured and self-updating Prometheus monitoring stack.
Prerequisites
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
Enter the following command to return the list of available PTP metrics from the cloud-event-proxy sidecar container:
$ oc exec -it <linuxptp_daemon_pod> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics
where:
<linuxptp_daemon_pod>
Specifies the pod you want to query, for example, linuxptp-daemon-2t78p.
Copy the name of the PTP metric you want to query from the list of returned metrics, for example, cne_amqp_events_received.
In the OKD web console, click Observe → Metrics.
Paste the PTP metric into the Expression field, and click Run queries.
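For example, to chart events received per address rather than the raw value for one pod, you can aggregate with a standard PromQL expression; the metric name is taken from the previous step:
sum by (address) (cne_amqp_events_received)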