This release has the following Validation Authority known issues.
- Temporary Kubernetes pods may run after command completion (ATEAM-16336)
- Newly deployed status after command execution (ATEAM-16337)
- Large kmdata files not supported (ATEAM-16338)
- Database validation error (ATEAM-17466)
- evactl logs not forwarded to Splunk (EDM-13275)
- Running shims not moved after a node dies (PKIPM-1090)
Temporary Kubernetes pods may run after command completion (ATEAM-16336)
Temporary Kubernetes pods may run after the completion of an evactl command. These pods are deleted when deploying and do not compromise the Entrust Validation Authority operation or the execution of further evactl commands.
Newly deployed status after command execution (ATEAM-16337)
After running some evactl commands, the Management Console displays Entrust Validation Authority as newly deployed.
Large kmdata files not supported (ATEAM-16338)
The evactl import-nshield command does not support kmdata files larger than ~100KB.
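Before running the import, you can check for oversized files. This is a minimal sketch; the path /opt/nfast/kmdata/local is an assumed example location for nShield kmdata, so adjust it to your installation.

```shell
# List kmdata files larger than 100 KB, which evactl import-nshield
# cannot handle (ATEAM-16338). KMDATA_DIR is an assumed example path.
KMDATA_DIR="${KMDATA_DIR:-/opt/nfast/kmdata/local}"
find "$KMDATA_DIR" -type f -size +100k 2>/dev/null || true
```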
Database validation error (ATEAM-17466)
The Management Console displays a validation error when:
- Importing a configuration file containing a non-empty database sslValidationCert value.
- Setting the SSL Mode database configuration to disable.
Workaround: Delete the sslValidationCert value in the configuration file before importing it.
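The workaround can be sketched with a one-line edit, assuming the exported configuration is a text file (the name eva-config.yaml and a single-line "sslValidationCert: <value>" layout are assumptions; adjust to your export):

```shell
# Sketch of the workaround for ATEAM-17466: remove the sslValidationCert
# entry from the exported configuration file before importing it.
# The file name eva-config.yaml and the single-line layout are assumptions.
CONFIG="${CONFIG:-eva-config.yaml}"
if [ -f "$CONFIG" ]; then
  sed -i.bak '/sslValidationCert/d' "$CONFIG"   # .bak keeps a backup copy
fi
```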
evactl logs not forwarded to Splunk (EDM-13275)
When integrated with a Splunk server, Cryptographic Security Platform does not forward logs recording evactl commands. However, these logs can be browsed using the Grafana portal.
See Managing Log Forwarder for integrating a Splunk server, or Browsing logs with Grafana for browsing logs in the Grafana portal.
Running shims not moved after a node dies (PKIPM-1090)
When a node dies, Entrust Validation Authority does not move the pods running shims to a live node. Therefore, these shims stop updating the database.
Workaround: Wait until the dead node returns, or kill the pod as follows.
List the pods.
sudo kubectl get pods -n eva -o wide
Kill the dead pod. For example:
sudo kubectl -n eva delete pod --force eva-cagwshim-n-0