- Network APIs
- ClusterNetwork [network.openshift.io/v1]
- Endpoints [core/v1]
- EndpointSlice [discovery.k8s.io/v1beta1]
- EgressNetworkPolicy [network.openshift.io/v1]
- HostSubnet [network.openshift.io/v1]
- Ingress [networking.k8s.io/v1]
- IngressClass [networking.k8s.io/v1]
- IPPool [whereabouts.cni.cncf.io/v1alpha1]
- NetNamespace [network.openshift.io/v1]
- NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
- NetworkPolicy [networking.k8s.io/v1]
- Route [route.openshift.io/v1]
- Service [core/v1]
Network APIs
ClusterNetwork [network.openshift.io/v1]
Description
ClusterNetwork describes the cluster network. There is normally only one object of this type, named “default”, which is created by the SDN network plugin based on the master configuration when the cluster is brought up for the first time.
Type
object
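A minimal sketch of a ClusterNetwork manifest, for illustration only. The CIDRs and plugin name below are assumed example values, and in practice the single "default" object is created by the SDN plugin rather than written by hand:

apiVersion: network.openshift.io/v1
kind: ClusterNetwork
metadata:
  name: default
clusterNetworks:
- CIDR: 10.128.0.0/14          # pod network CIDR (example value)
  hostSubnetLength: 9          # host bits per node subnet (example value)
serviceNetwork: 172.30.0.0/16  # service CIDR (example value)
pluginName: redhat/openshift-ovs-networkpolicy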
Endpoints [core/v1]
Description
Endpoints is a collection of endpoints that implement the actual service. Example:
  Name: "mysvc",
  Subsets: [
    {
      Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}],
      Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}]
    },
    {
      Addresses: [{"ip": "10.10.3.3"}],
      Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
    },
  ]
Type
object
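The example from the description, expressed as a manifest (the name, IPs, and ports are the illustrative values above, not defaults):

apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: 10.10.1.1
  - ip: 10.10.2.2
  ports:
  - name: a
    port: 8675
  - name: b
    port: 309
- addresses:
  - ip: 10.10.3.3
  ports:
  - name: a
    port: 93
  - name: b
    port: 76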
EndpointSlice [discovery.k8s.io/v1beta1]
Description
EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints.
Type
object
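A sketch of one EndpointSlice for a hypothetical service named mysvc. Slices are normally created by the endpoint slice controller, and the kubernetes.io/service-name label is what ties each slice back to its Service; all addresses and ports here are assumptions:

apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: mysvc-abc12            # generated name; example only
  labels:
    kubernetes.io/service-name: mysvc
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 8080
endpoints:
- addresses:
  - 10.10.1.1
  conditions:
    ready: true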
EgressNetworkPolicy [network.openshift.io/v1]
Description
EgressNetworkPolicy describes the current egress network policy for a Namespace. When using the 'redhat/openshift-ovs-multitenant' network plugin, traffic from a pod to an IP address outside the cluster is checked, in order, against each EgressNetworkPolicyRule in the EgressNetworkPolicy for the pod's namespace. If no rule matches (or no EgressNetworkPolicy is present), the traffic is allowed by default.
Type
object
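A hedged example showing rule ordering: traffic to the CIDR is allowed, traffic to the DNS name is denied, and anything matching neither rule falls through to the default allow described above. The namespace, CIDR, and hostname are assumptions:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: myproject
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.168.0.0/16
  - type: Deny
    to:
      dnsName: www.example.com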
HostSubnet [network.openshift.io/v1]
Description
HostSubnet describes the container subnet network on a node. The HostSubnet object must have the same name as the Node object it corresponds to.
Type
object
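A sketch of a HostSubnet. These objects are created by the SDN plugin; the node name, host IP, and subnet below are assumptions for illustration:

apiVersion: network.openshift.io/v1
kind: HostSubnet
metadata:
  name: node1.example.com      # must match the Node name
host: node1.example.com
hostIP: 192.168.1.10
subnet: 10.128.0.0/23          # per-node pod subnet (example value)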
Ingress [networking.k8s.io/v1]
Description
Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and so on.
Type
object
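A minimal Ingress routing HTTP traffic for an assumed host to a backend Service (all names and ports are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mysvc
            port:
              number: 8080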
IngressClass [networking.k8s.io/v1]
Description
IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class.
Type
object
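A sketch of an IngressClass marked as the default via the annotation described above; the controller string is an assumption and depends on the installed ingress controller:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller   # controller identifier; assumption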
IPPool [whereabouts.cni.cncf.io/v1alpha1]
Description
IPPool is the schema used by Whereabouts for IP address allocation.
Type
object
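A rough sketch of an IPPool. In practice these objects are created and updated by Whereabouts itself, and the naming convention and field names below (range, allocations) are assumptions based on the Whereabouts CRD rather than values taken from this document:

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: IPPool
metadata:
  name: 10.10.0.0-16           # assumption: Whereabouts derives pool names from the range
spec:
  range: 10.10.0.0/16
  allocations: {}              # populated by Whereabouts as addresses are assigned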
NetNamespace [network.openshift.io/v1]
Description
NetNamespace describes a single isolated network. When using the redhat/openshift-ovs-multitenant plugin, every Namespace will have a corresponding NetNamespace object with the same name. (When using redhat/openshift-ovs-subnet, NetNamespaces are not used.)
Type
object
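A sketch of a NetNamespace. These are maintained by the multitenant plugin, and the netid value below (the VNID used for isolation) is an arbitrary example:

apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: myproject
netname: myproject             # must match the Namespace name
netid: 100                     # VNID; example value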
NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
Description
NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec
Type
object
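A sketch of a NetworkAttachmentDefinition carrying a CNI configuration as a JSON string; the macvlan/whereabouts settings are assumptions used only to show the shape of spec.config:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "whereabouts",
        "range": "10.10.0.0/16"
      }
    }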
NetworkPolicy [networking.k8s.io/v1]
Description
NetworkPolicy describes what network traffic is allowed for a set of Pods.
Type
object
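A minimal NetworkPolicy that allows ingress to all pods in its namespace only from pods in the same namespace (a common starting point; the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # any pod in the same namespace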
Route [route.openshift.io/v1]
Description
A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints.
Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts.
Routers are subject to additional customization and may support additional controls via the annotations field.
Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen.
Enabling HTTP/2 ALPN on a route requires a custom (non-wildcard) certificate; this prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend.
Type
object
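A sketch of a Route exposing a Service with edge TLS termination; the host, service name, and target port are assumptions:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysvc
spec:
  host: www.example.com        # immutable once the route is created
  to:
    kind: Service
    name: mysvc
  port:
    targetPort: 8080
  tls:
    termination: edge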
Service [core/v1]
Description
Service is a named abstraction of a software service (for example, mysql), consisting of a local port (for example, 3306) that the proxy listens on and a selector that determines which pods will answer requests sent through the proxy.
Type
object
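A Service matching the description's mysql example: the proxy listens on port 3306 and forwards to the pods matched by the selector (the label key/value is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql                 # assumed label on the backing pods
  ports:
  - port: 3306
    targetPort: 3306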