# Production-ready Features

Spring Boot includes a number of additional features to help you monitor and manage your application when you push it to production. You can choose to manage and monitor your application by using HTTP endpoints or with JMX. Auditing, health, and metrics gathering can also be automatically applied to your application.

## 1. Enabling Production-ready Features

The [`spring-boot-actuator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator) module provides all of Spring Boot’s production-ready features. The recommended way to enable the features is to add a dependency on the `spring-boot-starter-actuator` “Starter”.

Definition of Actuator

An actuator is a manufacturing term that refers to a mechanical device for moving or controlling something. Actuators can generate a large amount of motion from a small change.

To add the actuator to a Maven-based project, add the following ‘Starter’ dependency:

```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

For Gradle, use the following declaration:

```
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
```

## 2. Endpoints

Actuator endpoints let you monitor and interact with your application. Spring Boot includes a number of built-in endpoints and lets you add your own. For example, the `health` endpoint provides basic application health information.

You can [enable or disable](#actuator.endpoints.enabling) each individual endpoint and [expose them (make them remotely accessible) over HTTP or JMX](#actuator.endpoints.exposing). An endpoint is considered to be available when it is both enabled and exposed. The built-in endpoints are auto-configured only when they are available. Most applications choose exposure over HTTP, where the ID of the endpoint and a prefix of `/actuator` is mapped to a URL. For example, by default, the `health` endpoint is mapped to `/actuator/health`.

| |To learn more about the Actuator’s endpoints and their request and response formats, see the separate API documentation ([HTML](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/htmlsingle) or [PDF](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/pdf/spring-boot-actuator-web-api.pdf)).|
|---|------------------------------------------------------------------|

The following technology-agnostic endpoints are available:

| ID | Description |
|------------------|-------------------------------------------------------------|
| `auditevents` | Exposes audit events information for the current application.<br/>Requires an `AuditEventRepository` bean. |
| `beans` | Displays a complete list of all the Spring beans in your application. |
| `caches` | Exposes available caches. |
| `conditions` | Shows the conditions that were evaluated on configuration and auto-configuration classes and the reasons why they did or did not match. |
| `configprops` | Displays a collated list of all `@ConfigurationProperties`. |
| `env` | Exposes properties from Spring’s `ConfigurableEnvironment`. |
| `flyway` | Shows any Flyway database migrations that have been applied.<br/>Requires one or more `Flyway` beans. |
| `health` | Shows application health information. |
| `httptrace` | Displays HTTP trace information (by default, the last 100 HTTP request-response exchanges).<br/>Requires an `HttpTraceRepository` bean. |
| `info` | Displays arbitrary application info. |
| `integrationgraph` | Shows the Spring Integration graph.<br/>Requires a dependency on `spring-integration-core`. |
| `loggers` | Shows and modifies the configuration of loggers in the application. |
| `liquibase` | Shows any Liquibase database migrations that have been applied.<br/>Requires one or more `Liquibase` beans. |
| `metrics` | Shows “metrics” information for the current application. |
| `mappings` | Displays a collated list of all `@RequestMapping` paths. |
| `quartz` | Shows information about Quartz Scheduler jobs. |
| `scheduledtasks` | Displays the scheduled tasks in your application. |
| `sessions` | Allows retrieval and deletion of user sessions from a Spring Session-backed session store.<br/>Requires a servlet-based web application that uses Spring Session. |
| `shutdown` | Lets the application be gracefully shut down.<br/>Disabled by default. |
| `startup` | Shows the [startup steps data](features.html#features.spring-application.startup-tracking) collected by the `ApplicationStartup`.<br/>Requires the `SpringApplication` to be configured with a `BufferingApplicationStartup`. |
| `threaddump` | Performs a thread dump. |

If your application is a web application (Spring MVC, Spring WebFlux, or Jersey), you can use the following additional endpoints:

| ID | Description |
|------------|-------------------------------------------------------------|
| `heapdump` | Returns a heap dump file.<br/>On a HotSpot JVM, an `HPROF`-format file is returned.<br/>On an OpenJ9 JVM, a `PHD`-format file is returned. |
| `jolokia` | Exposes JMX beans over HTTP when Jolokia is on the classpath (not available for WebFlux).<br/>Requires a dependency on `jolokia-core`. |
| `logfile` | Returns the contents of the logfile (if the `logging.file.name` or the `logging.file.path` property has been set).<br/>Supports the use of the HTTP `Range` header to retrieve part of the log file’s content. |
| `prometheus` | Exposes metrics in a format that can be scraped by a Prometheus server.<br/>Requires a dependency on `micrometer-registry-prometheus`. |

### 2.1. Enabling Endpoints

By default, all endpoints except for `shutdown` are enabled. To configure the enablement of an endpoint, use its `management.endpoint.<id>.enabled` property. The following example enables the `shutdown` endpoint:

Properties

```
management.endpoint.shutdown.enabled=true
```

Yaml

```
management:
  endpoint:
    shutdown:
      enabled: true
```

If you prefer endpoint enablement to be opt-in rather than opt-out, set the `management.endpoints.enabled-by-default` property to `false` and use individual endpoint `enabled` properties to opt back in. The following example enables the `info` endpoint and disables all other endpoints:

Properties

```
management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true
```

Yaml

```
management:
  endpoints:
    enabled-by-default: false
  endpoint:
    info:
      enabled: true
```

| |Disabled endpoints are removed entirely from the application context.

If you want to change only the technologies over which an endpoint is exposed, use the [`include` and `exclude` properties](#actuator.endpoints.exposing) instead.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 2.2. Exposing Endpoints Since Endpoints may contain sensitive information, you should carefully consider when to expose them. The following table shows the default exposure for the built-in endpoints: | ID |JMX|Web| |------------------|---|---| | `auditevents` |Yes|No | | `beans` |Yes|No | | `caches` |Yes|No | | `conditions` |Yes|No | | `configprops` |Yes|No | | `env` |Yes|No | | `flyway` |Yes|No | | `health` |Yes|Yes| | `heapdump` |N/A|No | | `httptrace` |Yes|No | | `info` |Yes|No | |`integrationgraph`|Yes|No | | `jolokia` |N/A|No | | `logfile` |N/A|No | | `loggers` |Yes|No | | `liquibase` |Yes|No | | `metrics` |Yes|No | | `mappings` |Yes|No | | `prometheus` |N/A|No | | `quartz` |Yes|No | | `scheduledtasks` |Yes|No | | `sessions` |Yes|No | | `shutdown` |Yes|No | | `startup` |Yes|No | | `threaddump` |Yes|No | To change which endpoints are exposed, use the following technology-specific `include` and `exclude` properties: | Property |Default | |-------------------------------------------|--------| |`management.endpoints.jmx.exposure.exclude`| | |`management.endpoints.jmx.exposure.include`| `*` | |`management.endpoints.web.exposure.exclude`| | |`management.endpoints.web.exposure.include`|`health`| The `include` property lists the IDs of the endpoints that are exposed. The `exclude` property lists the IDs of the endpoints that should not be exposed. The `exclude` property takes precedence over the `include` property. You can configure both the `include` and the `exclude` properties with a list of endpoint IDs. For example, to stop exposing all endpoints over JMX and only expose the `health` and `info` endpoints, use the following property: Properties ``` management.endpoints.jmx.exposure.include=health,info ``` Yaml ``` management: endpoints: jmx: exposure: include: "health,info" ``` `*` can be used to select all endpoints. For example, to expose everything over HTTP except the `env` and `beans` endpoints, use the following properties: Properties ``` management.endpoints.web.exposure.include=* management.endpoints.web.exposure.exclude=env,beans ``` Yaml ``` management: endpoints: web: exposure: include: "*" exclude: "env,beans" ``` | |`*` has a special meaning in YAML, so be sure to add quotation marks if you want to include (or exclude) all endpoints.| |---|-----------------------------------------------------------------------------------------------------------------------| | |If your application is exposed publicly, we strongly recommend that you also [secure your endpoints](#actuator.endpoints.security).| |---|-----------------------------------------------------------------------------------------------------------------------------------| | |If you want to implement your own strategy for when endpoints are exposed, you can register an `EndpointFilter` bean.| |---|---------------------------------------------------------------------------------------------------------------------| ### 2.3. Security For security purposes, all actuators other than `/health` are disabled by default. You can use the `management.endpoints.web.exposure.include` property to enable the actuators. 
| |Before setting the `management.endpoints.web.exposure.include`, ensure that the exposed actuators do not contain sensitive information, are secured by placing them behind a firewall, or are secured by something like Spring Security.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| If Spring Security is on the classpath and no other `WebSecurityConfigurerAdapter` or `SecurityFilterChain` bean is present, all actuators other than `/health` are secured by Spring Boot auto-configuration. If you define a custom `WebSecurityConfigurerAdapter` or `SecurityFilterChain` bean, Spring Boot auto-configuration backs off and lets you fully control the actuator access rules. If you wish to configure custom security for HTTP endpoints (for example, to allow only users with a certain role to access them), Spring Boot provides some convenient `RequestMatcher` objects that you can use in combination with Spring Security. A typical Spring Security configuration might look something like the following example: ``` import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.web.SecurityFilterChain; @Configuration(proxyBeanMethods = false) public class MySecurityConfiguration { @Bean public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception { http.requestMatcher(EndpointRequest.toAnyEndpoint()) .authorizeRequests((requests) -> requests.anyRequest().hasRole("ENDPOINT_ADMIN")); http.httpBasic(); return http.build(); } } ``` The preceding example uses `EndpointRequest.toAnyEndpoint()` to match a request to any endpoint and then ensures that all have the `ENDPOINT_ADMIN` role. Several other matcher methods are also available on `EndpointRequest`. See the API documentation ([HTML](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/htmlsingle) or [PDF](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/pdf/spring-boot-actuator-web-api.pdf)) for details. If you deploy applications behind a firewall, you may prefer that all your actuator endpoints can be accessed without requiring authentication. 
You can do so by changing the `management.endpoints.web.exposure.include` property, as follows: Properties ``` management.endpoints.web.exposure.include=* ``` Yaml ``` management: endpoints: web: exposure: include: "*" ``` Additionally, if Spring Security is present, you would need to add custom security configuration that allows unauthenticated access to the endpoints, as the following example shows: ``` import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.web.SecurityFilterChain; @Configuration(proxyBeanMethods = false) public class MySecurityConfiguration { @Bean public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception { http.requestMatcher(EndpointRequest.toAnyEndpoint()) .authorizeRequests((requests) -> requests.anyRequest().permitAll()); return http.build(); } } ``` | |In both of the preceding examples, the configuration applies only to the actuator endpoints.
Since Spring Boot’s security configuration backs off completely in the presence of any `SecurityFilterChain` bean, you need to configure an additional `SecurityFilterChain` bean with rules that apply to the rest of the application.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 2.3.1. Cross Site Request Forgery Protection Since Spring Boot relies on Spring Security’s defaults, CSRF protection is turned on by default. This means that the actuator endpoints that require a `POST` (shutdown and loggers endpoints), a `PUT`, or a `DELETE` get a 403 (forbidden) error when the default security configuration is in use. | |We recommend disabling CSRF protection completely only if you are creating a service that is used by non-browser clients.| |---|-------------------------------------------------------------------------------------------------------------------------| You can find additional information about CSRF protection in the [Spring Security Reference Guide](https://docs.spring.io/spring-security/reference/5.6.2/features/exploits/csrf.html). ### 2.4. Configuring Endpoints Endpoints automatically cache responses to read operations that do not take any parameters. To configure the amount of time for which an endpoint caches a response, use its `cache.time-to-live` property. The following example sets the time-to-live of the `beans` endpoint’s cache to 10 seconds: Properties ``` management.endpoint.beans.cache.time-to-live=10s ``` Yaml ``` management: endpoint: beans: cache: time-to-live: "10s" ``` | |The `management.endpoint.` prefix uniquely identifies the endpoint that is being configured.| |---|--------------------------------------------------------------------------------------------------| ### 2.5. Hypermedia for Actuator Web Endpoints A “discovery page” is added with links to all the endpoints. The “discovery page” is available on `/actuator` by default. To disable the “discovery page”, add the following property to your application properties: Properties ``` management.endpoints.web.discovery.enabled=false ``` Yaml ``` management: endpoints: web: discovery: enabled: false ``` When a custom management context path is configured, the “discovery page” automatically moves from `/actuator` to the root of the management context. For example, if the management context path is `/management`, the discovery page is available from `/management`. When the management context path is set to `/`, the discovery page is disabled to prevent the possibility of a clash with other mappings. ### 2.6. CORS Support [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) (CORS) is a [W3C specification](https://www.w3.org/TR/cors/) that lets you specify in a flexible way what kind of cross-domain requests are authorized. If you use Spring MVC or Spring WebFlux, you can configure Actuator’s web endpoints to support such scenarios. CORS support is disabled by default and is only enabled once you have set the `management.endpoints.web.cors.allowed-origins` property. 
The following configuration permits `GET` and `POST` calls from the `example.com` domain: Properties ``` management.endpoints.web.cors.allowed-origins=https://example.com management.endpoints.web.cors.allowed-methods=GET,POST ``` Yaml ``` management: endpoints: web: cors: allowed-origins: "https://example.com" allowed-methods: "GET,POST" ``` | |See [`CorsEndpointProperties`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator-autoconfigure/src/main/java/org/springframework/boot/actuate/autoconfigure/endpoint/web/CorsEndpointProperties.java) for a complete list of options.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 2.7. Implementing Custom Endpoints If you add a `@Bean` annotated with `@Endpoint`, any methods annotated with `@ReadOperation`, `@WriteOperation`, or `@DeleteOperation` are automatically exposed over JMX and, in a web application, over HTTP as well. Endpoints can be exposed over HTTP by using Jersey, Spring MVC, or Spring WebFlux. If both Jersey and Spring MVC are available, Spring MVC is used. The following example exposes a read operation that returns a custom object: ``` @ReadOperation public CustomData getData() { return new CustomData("test", 5); } ``` You can also write technology-specific endpoints by using `@JmxEndpoint` or `@WebEndpoint`. These endpoints are restricted to their respective technologies. For example, `@WebEndpoint` is exposed only over HTTP and not over JMX. You can write technology-specific extensions by using `@EndpointWebExtension` and `@EndpointJmxExtension`. These annotations let you provide technology-specific operations to augment an existing endpoint. Finally, if you need access to web-framework-specific functionality, you can implement servlet or Spring `@Controller` and `@RestController` endpoints at the cost of them not being available over JMX or when using a different web framework. #### 2.7.1. Receiving Input Operations on an endpoint receive input through their parameters. When exposed over the web, the values for these parameters are taken from the URL’s query parameters and from the JSON request body. When exposed over JMX, the parameters are mapped to the parameters of the MBean’s operations. Parameters are required by default. They can be made optional by annotating them with either `@javax.annotation.Nullable` or `@org.springframework.lang.Nullable`. You can map each root property in the JSON request body to a parameter of the endpoint. Consider the following JSON request body: ``` { "name": "test", "counter": 42 } ``` You can use this to invoke a write operation that takes `String name` and `int counter` parameters, as the following example shows: ``` @WriteOperation public void updateData(String name, int counter) { // injects "test" and 42 } ``` | |Because endpoints are technology agnostic, only simple types can be specified in the method signature.
In particular, declaring a single parameter with a `CustomData` type that defines `name` and `counter` properties is not supported.| |---|------------------------------------------------------------| | |To let the input be mapped to the operation method’s parameters, Java code that implements an endpoint should be compiled with `-parameters`, and Kotlin code that implements an endpoint should be compiled with `-java-parameters`.

This will happen automatically if you use Spring Boot’s Gradle plugin or if you use Maven and `spring-boot-starter-parent`.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ##### Input Type Conversion The parameters passed to endpoint operation methods are, if necessary, automatically converted to the required type. Before calling an operation method, the input received over JMX or HTTP is converted to the required types by using an instance of `ApplicationConversionService` as well as any `Converter` or `GenericConverter` beans qualified with `@EndpointConverter`. #### 2.7.2. Custom Web Endpoints Operations on an `@Endpoint`, `@WebEndpoint`, or `@EndpointWebExtension` are automatically exposed over HTTP using Jersey, Spring MVC, or Spring WebFlux. If both Jersey and Spring MVC are available, Spring MVC is used. ##### Web Endpoint Request Predicates A request predicate is automatically generated for each operation on a web-exposed endpoint. ##### Path The path of the predicate is determined by the ID of the endpoint and the base path of the web-exposed endpoints. The default base path is `/actuator`. For example, an endpoint with an ID of `sessions` uses `/actuator/sessions` as its path in the predicate. You can further customize the path by annotating one or more parameters of the operation method with `@Selector`. Such a parameter is added to the path predicate as a path variable. The variable’s value is passed into the operation method when the endpoint operation is invoked. If you want to capture all remaining path elements, you can add `@Selector(Match=ALL_REMAINING)` to the last parameter and make it a type that is conversion-compatible with a `String[]`. ##### HTTP method The HTTP method of the predicate is determined by the operation type, as shown in the following table: | Operation |HTTP method| |------------------|-----------| | `@ReadOperation` | `GET` | |`@WriteOperation` | `POST` | |`@DeleteOperation`| `DELETE` | ##### Consumes For a `@WriteOperation` (HTTP `POST`) that uses the request body, the `consumes` clause of the predicate is `application/vnd.spring-boot.actuator.v2+json, application/json`. For all other operations, the `consumes` clause is empty. ##### Produces The `produces` clause of the predicate can be determined by the `produces` attribute of the `@DeleteOperation`, `@ReadOperation`, and `@WriteOperation` annotations. The attribute is optional. If it is not used, the `produces` clause is determined automatically. If the operation method returns `void` or `Void`, the `produces` clause is empty. If the operation method returns a `org.springframework.core.io.Resource`, the `produces` clause is `application/octet-stream`. For all other operations, the `produces` clause is `application/vnd.spring-boot.actuator.v2+json, application/json`. ##### Web Endpoint Response Status The default response status for an endpoint operation depends on the operation type (read, write, or delete) and what, if anything, the operation returns. If a `@ReadOperation` returns a value, the response status will be 200 (OK). If it does not return a value, the response status will be 404 (Not Found). 
If a `@WriteOperation` or `@DeleteOperation` returns a value, the response status will be 200 (OK). If it does not return a value, the response status will be 204 (No Content). If an operation is invoked without a required parameter or with a parameter that cannot be converted to the required type, the operation method is not called, and the response status will be 400 (Bad Request). ##### Web Endpoint Range Requests You can use an HTTP range request to request part of an HTTP resource. When using Spring MVC or Spring Web Flux, operations that return a `org.springframework.core.io.Resource` automatically support range requests. | |Range requests are not supported when using Jersey.| |---|---------------------------------------------------| ##### Web Endpoint Security An operation on a web endpoint or a web-specific endpoint extension can receive the current `java.security.Principal` or `org.springframework.boot.actuate.endpoint.SecurityContext` as a method parameter. The former is typically used in conjunction with `@Nullable` to provide different behavior for authenticated and unauthenticated users. The latter is typically used to perform authorization checks by using its `isUserInRole(String)` method. #### 2.7.3. Servlet Endpoints A servlet can be exposed as an endpoint by implementing a class annotated with `@ServletEndpoint` that also implements `Supplier`. Servlet endpoints provide deeper integration with the servlet container but at the expense of portability. They are intended to be used to expose an existing servlet as an endpoint. For new endpoints, the `@Endpoint` and `@WebEndpoint` annotations should be preferred whenever possible. #### 2.7.4. Controller Endpoints You can use `@ControllerEndpoint` and `@RestControllerEndpoint` to implement an endpoint that is exposed only by Spring MVC or Spring WebFlux. Methods are mapped by using the standard annotations for Spring MVC and Spring WebFlux, such as `@RequestMapping` and `@GetMapping`, with the endpoint’s ID being used as a prefix for the path. Controller endpoints provide deeper integration with Spring’s web frameworks but at the expense of portability. The `@Endpoint` and `@WebEndpoint` annotations should be preferred whenever possible. ### 2.8. Health Information You can use health information to check the status of your running application. It is often used by monitoring software to alert someone when a production system goes down. The information exposed by the `health` endpoint depends on the `management.endpoint.health.show-details` and `management.endpoint.health.show-components` properties, which can be configured with one of the following values: | Name | Description | |-----------------|-------------------------------------------------------------------------------------------------------------------------------| | `never` | Details are never shown. | |`when-authorized`|Details are shown only to authorized users.
Authorized roles can be configured by using `management.endpoint.health.roles`.| | `always` | Details are shown to all users. | The default value is `never`. A user is considered to be authorized when they are in one or more of the endpoint’s roles. If the endpoint has no configured roles (the default), all authenticated users are considered to be authorized. You can configure the roles by using the `management.endpoint.health.roles` property. | |If you have secured your application and wish to use `always`, your security configuration must permit access to the health endpoint for both authenticated and unauthenticated users.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Health information is collected from the content of a [`HealthContributorRegistry`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/HealthContributorRegistry.java) (by default, all [`HealthContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/HealthContributor.java) instances defined in your `ApplicationContext`). Spring Boot includes a number of auto-configured `HealthContributors`, and you can also write your own. A `HealthContributor` can be either a `HealthIndicator` or a `CompositeHealthContributor`. A `HealthIndicator` provides actual health information, including a `Status`. A `CompositeHealthContributor` provides a composite of other `HealthContributors`. Taken together, contributors form a tree structure to represent the overall system health. By default, the final system health is derived by a `StatusAggregator`, which sorts the statuses from each `HealthIndicator` based on an ordered list of statuses. The first status in the sorted list is used as the overall health status. If no `HealthIndicator` returns a status that is known to the `StatusAggregator`, an `UNKNOWN` status is used. | |You can use the `HealthContributorRegistry` to register and unregister health indicators at runtime.| |---|----------------------------------------------------------------------------------------------------| #### 2.8.1. Auto-configured HealthIndicators When appropriate, Spring Boot auto-configures the `HealthIndicators` listed in the following table. You can also enable or disable selected indicators by configuring `management.health.key.enabled`, with the `key` listed in the following table: | Key | Name | Description | |---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------| | `cassandra` | [`CassandraDriverHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/cassandra/CassandraDriverHealthIndicator.java) | Checks that a Cassandra database is up. 
| | `couchbase` | [`CouchbaseHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/couchbase/CouchbaseHealthIndicator.java) | Checks that a Couchbase cluster is up. | | `db` | [`DataSourceHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/jdbc/DataSourceHealthIndicator.java) |Checks that a connection to `DataSource` can be obtained.| | `diskspace` | [`DiskSpaceHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/system/DiskSpaceHealthIndicator.java) | Checks for low disk space. | |`elasticsearch`|[`ElasticsearchRestHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/elasticsearch/ElasticsearchRestHealthIndicator.java)| Checks that an Elasticsearch cluster is up. | | `hazelcast` | [`HazelcastHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/hazelcast/HazelcastHealthIndicator.java) | Checks that a Hazelcast server is up. | | `influxdb` | [`InfluxDbHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/influx/InfluxDbHealthIndicator.java) | Checks that an InfluxDB server is up. | | `jms` | [`JmsHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/jms/JmsHealthIndicator.java) | Checks that a JMS broker is up. | | `ldap` | [`LdapHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/ldap/LdapHealthIndicator.java) | Checks that an LDAP server is up. | | `mail` | [`MailHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/mail/MailHealthIndicator.java) | Checks that a mail server is up. | | `mongo` | [`MongoHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/mongo/MongoHealthIndicator.java) | Checks that a Mongo database is up. | | `neo4j` | [`Neo4jHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/neo4j/Neo4jHealthIndicator.java) | Checks that a Neo4j database is up. | | `ping` | [`PingHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/PingHealthIndicator.java) | Always responds with `UP`. | | `rabbit` | [`RabbitHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/amqp/RabbitHealthIndicator.java) | Checks that a Rabbit server is up. 
| | `redis` | [`RedisHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/redis/RedisHealthIndicator.java) | Checks that a Redis server is up. | | `solr` | [`SolrHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/solr/SolrHealthIndicator.java) | Checks that a Solr server is up. | | |You can disable them all by setting the `management.health.defaults.enabled` property.| |---|--------------------------------------------------------------------------------------| Additional `HealthIndicators` are available but are not enabled by default: | Key | Name | Description | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------| |`livenessstate` | [`LivenessStateHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/availability/LivenessStateHealthIndicator.java) |Exposes the “Liveness” application availability state. | |`readinessstate`|[`ReadinessStateHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/availability/ReadinessStateHealthIndicator.java)|Exposes the “Readiness” application availability state.| #### 2.8.2. Writing Custom HealthIndicators To provide custom health information, you can register Spring beans that implement the [`HealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/HealthIndicator.java) interface. You need to provide an implementation of the `health()` method and return a `Health` response. The `Health` response should include a status and can optionally include additional details to be displayed. The following code shows a sample `HealthIndicator` implementation: ``` import org.springframework.boot.actuate.health.Health; import org.springframework.boot.actuate.health.HealthIndicator; import org.springframework.stereotype.Component; @Component public class MyHealthIndicator implements HealthIndicator { @Override public Health health() { int errorCode = check(); if (errorCode != 0) { return Health.down().withDetail("Error Code", errorCode).build(); } return Health.up().build(); } private int check() { // perform some specific health check return ... } } ``` | |The identifier for a given `HealthIndicator` is the name of the bean without the `HealthIndicator` suffix, if it exists.
In the preceding example, the health information is available in an entry named `my`.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| In addition to Spring Boot’s predefined [`Status`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/Status.java) types, `Health` can return a custom `Status` that represents a new system state. In such cases, you also need to provide a custom implementation of the [`StatusAggregator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/StatusAggregator.java) interface, or you must configure the default implementation by using the `management.endpoint.health.status.order` configuration property. For example, assume a new `Status` with a code of `FATAL` is being used in one of your `HealthIndicator` implementations. To configure the severity order, add the following property to your application properties: Properties ``` management.endpoint.health.status.order=fatal,down,out-of-service,unknown,up ``` Yaml ``` management: endpoint: health: status: order: "fatal,down,out-of-service,unknown,up" ``` The HTTP status code in the response reflects the overall health status. By default, `OUT_OF_SERVICE` and `DOWN` map to 503. Any unmapped health statuses, including `UP`, map to 200. You might also want to register custom status mappings if you access the health endpoint over HTTP. Configuring a custom mapping disables the defaults mappings for `DOWN` and `OUT_OF_SERVICE`. If you want to retain the default mappings, you must explicitly configure them, alongside any custom mappings. For example, the following property maps `FATAL` to 503 (service unavailable) and retains the default mappings for `DOWN` and `OUT_OF_SERVICE`: Properties ``` management.endpoint.health.status.http-mapping.down=503 management.endpoint.health.status.http-mapping.fatal=503 management.endpoint.health.status.http-mapping.out-of-service=503 ``` Yaml ``` management: endpoint: health: status: http-mapping: down: 503 fatal: 503 out-of-service: 503 ``` | |If you need more control, you can define your own `HttpCodeStatusMapper` bean.| |---|------------------------------------------------------------------------------| The following table shows the default status mappings for the built-in statuses: | Status | Mapping | |----------------|----------------------------------------------| | `DOWN` | `SERVICE_UNAVAILABLE` (`503`) | |`OUT_OF_SERVICE`| `SERVICE_UNAVAILABLE` (`503`) | | `UP` |No mapping by default, so HTTP status is `200`| | `UNKNOWN` |No mapping by default, so HTTP status is `200`| #### 2.8.3. Reactive Health Indicators For reactive applications, such as those that use Spring WebFlux, `ReactiveHealthContributor` provides a non-blocking contract for getting application health. 
Similar to a traditional `HealthContributor`, health information is collected from the content of a [`ReactiveHealthContributorRegistry`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/ReactiveHealthContributorRegistry.java) (by default, all [`HealthContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/HealthContributor.java) and [`ReactiveHealthContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/ReactiveHealthContributor.java) instances defined in your `ApplicationContext`). Regular `HealthContributors` that do not check against a reactive API are executed on the elastic scheduler. | |In a reactive application, you should use the `ReactiveHealthContributorRegistry` to register and unregister health indicators at runtime.
If you need to register a regular `HealthContributor`, you should wrap it with `ReactiveHealthContributor#adapt`.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| To provide custom health information from a reactive API, you can register Spring beans that implement the [`ReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/health/ReactiveHealthIndicator.java) interface. The following code shows a sample `ReactiveHealthIndicator` implementation: ``` import reactor.core.publisher.Mono; import org.springframework.boot.actuate.health.Health; import org.springframework.boot.actuate.health.ReactiveHealthIndicator; import org.springframework.stereotype.Component; @Component public class MyReactiveHealthIndicator implements ReactiveHealthIndicator { @Override public Mono health() { return doHealthCheck().onErrorResume((exception) -> Mono.just(new Health.Builder().down(exception).build())); } private Mono doHealthCheck() { // perform some specific health check return ... } } ``` | |To handle the error automatically, consider extending from `AbstractReactiveHealthIndicator`.| |---|---------------------------------------------------------------------------------------------| #### 2.8.4. Auto-configured ReactiveHealthIndicators #### When appropriate, Spring Boot auto-configures the following `ReactiveHealthIndicators`: | Key | Name | Description | |---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------| | `cassandra` |[`CassandraDriverReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/cassandra/CassandraDriverReactiveHealthIndicator.java)| Checks that a Cassandra database is up. | | `couchbase` | [`CouchbaseReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/couchbase/CouchbaseReactiveHealthIndicator.java) | Checks that a Couchbase cluster is up. | |`elasticsearch`|[`ElasticsearchReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/elasticsearch/ElasticsearchReactiveHealthIndicator.java)|Checks that an Elasticsearch cluster is up.| | `mongo` | [`MongoReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/mongo/MongoReactiveHealthIndicator.java) | Checks that a Mongo database is up. | | `neo4j` | [`Neo4jReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/neo4j/Neo4jReactiveHealthIndicator.java) | Checks that a Neo4j database is up. 
| | `redis` | [`RedisReactiveHealthIndicator`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/redis/RedisReactiveHealthIndicator.java) | Checks that a Redis server is up. | | |If necessary, reactive indicators replace the regular ones.
Also, any `HealthIndicator` that is not handled explicitly is wrapped automatically.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------| #### 2.8.5. Health Groups It is sometimes useful to organize health indicators into groups that you can use for different purposes. To create a health indicator group, you can use the `management.endpoint.health.group.` property and specify a list of health indicator IDs to `include` or `exclude`. For example, to create a group that includes only database indicators you can define the following: Properties ``` management.endpoint.health.group.custom.include=db ``` Yaml ``` management: endpoint: health: group: custom: include: "db" ``` You can then check the result by hitting `[localhost:8080/actuator/health/custom](http://localhost:8080/actuator/health/custom)`. Similarly, to create a group that excludes the database indicators from the group and includes all the other indicators, you can define the following: Properties ``` management.endpoint.health.group.custom.exclude=db ``` Yaml ``` management: endpoint: health: group: custom: exclude: "db" ``` By default, groups inherit the same `StatusAggregator` and `HttpCodeStatusMapper` settings as the system health. However, you can also define these on a per-group basis. You can also override the `show-details` and `roles` properties if required: Properties ``` management.endpoint.health.group.custom.show-details=when-authorized management.endpoint.health.group.custom.roles=admin management.endpoint.health.group.custom.status.order=fatal,up management.endpoint.health.group.custom.status.http-mapping.fatal=500 management.endpoint.health.group.custom.status.http-mapping.out-of-service=500 ``` Yaml ``` management: endpoint: health: group: custom: show-details: "when-authorized" roles: "admin" status: order: "fatal,up" http-mapping: fatal: 500 out-of-service: 500 ``` | |You can use `@Qualifier("groupname")` if you need to register custom `StatusAggregator` or `HttpCodeStatusMapper` beans for use with the group.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------| A health group can also include/exclude a `CompositeHealthContributor`. You can also include/exclude only a certain component of a `CompositeHealthContributor`. This can be done using the fully qualified name of the component as follows: ``` management.endpoint.health.group.custom.include="test/primary" management.endpoint.health.group.custom.exclude="test/primary/b" ``` In the example above, the `custom` group will include the `HealthContributor` with the name `primary` which is a component of the composite `test`. Here, `primary` itself is a composite and the `HealthContributor` with the name `b` will be excluded from the `custom` group. Health groups can be made available at an additional path on either the main or management port. This is useful in cloud environments such as Kubernetes, where it is quite common to use a separate management port for the actuator endpoints for security purposes. Having a separate port could lead to unreliable health checks because the main application might not work properly even if the health check is successful. 
The health group can be configured with an additional path as follows:

```
management.endpoint.health.group.live.additional-path="server:/healthz"
```

This would make the `live` health group available on the main server port at `/healthz`. The prefix is mandatory and must be either `server:` (represents the main server port) or `management:` (represents the management port, if configured). The path must be a single path segment.

#### 2.8.6. DataSource Health

The `DataSource` health indicator shows the health of both standard data sources and routing data source beans. The health of a routing data source includes the health of each of its target data sources. In the health endpoint’s response, each of a routing data source’s targets is named by using its routing key. If you prefer not to include routing data sources in the indicator’s output, set `management.health.db.ignore-routing-data-sources` to `true`.

### 2.9. Kubernetes Probes

Applications deployed on Kubernetes can provide information about their internal state with [Container Probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes). Depending on [your Kubernetes configuration](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), the kubelet calls those probes and reacts to the result.

By default, Spring Boot manages your [Application Availability State](features.html#features.spring-application.application-availability). If deployed in a Kubernetes environment, actuator gathers the “Liveness” and “Readiness” information from the `ApplicationAvailability` interface and uses that information in dedicated [health indicators](#actuator.endpoints.health.auto-configured-health-indicators): `LivenessStateHealthIndicator` and `ReadinessStateHealthIndicator`. These indicators are shown on the global health endpoint (`"/actuator/health"`). They are also exposed as separate HTTP Probes by using [health groups](#actuator.endpoints.health.groups): `"/actuator/health/liveness"` and `"/actuator/health/readiness"`.

You can then configure your Kubernetes infrastructure with the following endpoint information:

```
livenessProbe:
  httpGet:
    path: "/actuator/health/liveness"
    port: <actuator-port>
  failureThreshold: ...
  periodSeconds: ...

readinessProbe:
  httpGet:
    path: "/actuator/health/readiness"
    port: <actuator-port>
  failureThreshold: ...
  periodSeconds: ...
```

| |`<actuator-port>` should be set to the port that the actuator endpoints are available on.

It could be the main web server port or a separate management port if the `"management.server.port"` property has been set.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| These health groups are automatically enabled only if the application [runs in a Kubernetes environment](deployment.html#deployment.cloud.kubernetes). You can enable them in any environment by using the `management.endpoint.health.probes.enabled` configuration property. | |If an application takes longer to start than the configured liveness period, Kubernetes mentions the `"startupProbe"` as a possible solution.
The `"startupProbe"` is not necessarily needed here, as the `"readinessProbe"` fails until all startup tasks are done. See the section that describes [how probes behave during the application lifecycle](#actuator.endpoints.kubernetes-probes.lifecycle).| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| If your Actuator endpoints are deployed on a separate management context, the endpoints do not use the same web infrastructure (port, connection pools, framework components) as the main application. In this case, a probe check could be successful even if the main application does not work properly (for example, it cannot accept new connections). For this reason, is it a good idea to make the `liveness` and `readiness` health groups available on the main server port. This can be done by setting the following property: ``` management.endpoint.health.probes.add-additional-paths=true ``` This would make `liveness` available at `/livez` and `readiness` at `readyz` on the main server port. #### 2.9.1. Checking External State with Kubernetes Probes #### Actuator configures the “liveness” and “readiness” probes as Health Groups. This means that all the [health groups features](#actuator.endpoints.health.groups) are available for them. You can, for example, configure additional Health Indicators: Properties ``` management.endpoint.health.group.readiness.include=readinessState,customCheck ``` Yaml ``` management: endpoint: health: group: readiness: include: "readinessState,customCheck" ``` By default, Spring Boot does not add other health indicators to these groups. The “liveness” probe should not depend on health checks for external systems. If the [liveness state of an application](features.html#features.spring-application.application-availability.liveness) is broken, Kubernetes tries to solve that problem by restarting the application instance. This means that if an external system (such as a database, a Web API, or an external cache) fails, Kubernetes might restart all application instances and create cascading failures. As for the “readiness” probe, the choice of checking external systems must be made carefully by the application developers. For this reason, Spring Boot does not include any additional health checks in the readiness probe. If the [readiness state of an application instance](features.html#features.spring-application.application-availability.readiness) is unready, Kubernetes does not route traffic to that instance. Some external systems might not be shared by application instances, in which case they could be included in a readiness probe. Other external systems might not be essential to the application (the application could have circuit breakers and fallbacks), in which case they definitely should not be included. Unfortunately, an external system that is shared by all application instances is common, and you have to make a judgement call: Include it in the readiness probe and expect that the application is taken out of service when the external service is down or leave it out and deal with failures higher up the stack, perhaps by using a circuit breaker in the caller. 
| |If all instances of an application are unready, a Kubernetes Service with `type=ClusterIP` or `NodePort` does not accept any incoming connections.
There is no HTTP error response (503 and so on), since there is no connection.
A service with `type=LoadBalancer` might or might not accept connections, depending on the provider.
A service that has an explicit [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) also responds in a way that depends on the implementation — the ingress service itself has to decide how to handle the “connection refused” from downstream.
HTTP 503 is quite likely in the case of both load balancer and ingress.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Also, if an application uses Kubernetes [autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), it may react differently to applications being taken out of the load-balancer, depending on its autoscaler configuration. #### 2.9.2. Application Lifecycle and Probe States An important aspect of the Kubernetes Probes support is its consistency with the application lifecycle. There is a significant difference between the `AvailabilityState` (which is the in-memory, internal state of the application) and the actual probe (which exposes that state). Depending on the phase of application lifecycle, the probe might not be available. Spring Boot publishes [application events during startup and shutdown](features.html#features.spring-application.application-events-and-listeners), and probes can listen to such events and expose the `AvailabilityState` information. The following tables show the `AvailabilityState` and the state of HTTP connectors at different stages. When a Spring Boot application starts: |Startup phase|LivenessState| ReadinessState | HTTP server | Notes | |-------------|-------------|-------------------|----------------|--------------------------------------------------------------------------------------------------------------| | Starting | `BROKEN` |`REFUSING_TRAFFIC` | Not started | Kubernetes checks the "liveness" Probe and restarts the application if it takes too long. | | Started | `CORRECT` |`REFUSING_TRAFFIC` |Refuses requests|The application context is refreshed. The application performs startup tasks and does not receive traffic yet.| | Ready | `CORRECT` |`ACCEPTING_TRAFFIC`|Accepts requests| Startup tasks are finished. The application is receiving traffic. | When a Spring Boot application shuts down: | Shutdown phase |Liveness State| Readiness State | HTTP server | Notes | |-----------------|--------------|-------------------|-------------------------|---------------------------------------------------------------------------------------------| | Running | `CORRECT` |`ACCEPTING_TRAFFIC`| Accepts requests | Shutdown has been requested. | |Graceful shutdown| `CORRECT` |`REFUSING_TRAFFIC` |New requests are rejected|If enabled, [graceful shutdown processes in-flight requests](web.html#web.graceful-shutdown).| |Shutdown complete| N/A | N/A | Server is shut down | The application context is closed and the application is shut down. | | |See [Kubernetes container lifecycle section](deployment.html#deployment.cloud.kubernetes.container-lifecycle) for more information about Kubernetes deployment.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 2.10. 
Application Information Application information exposes various information collected from all [`InfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/InfoContributor.java) beans defined in your `ApplicationContext`. Spring Boot includes a number of auto-configured `InfoContributor` beans, and you can write your own. #### 2.10.1. Auto-configured InfoContributors When appropriate, Spring auto-configures the following `InfoContributor` beans: | ID | Name | Description | Prerequisites | |-------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|--------------------------------------------| |`build`| [`BuildInfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/BuildInfoContributor.java) | Exposes build information. |A `META-INF/build-info.properties` resource.| | `env` |[`EnvironmentInfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/EnvironmentInfoContributor.java)|Exposes any property from the `Environment` whose name starts with `info.`.| None. | | `git` | [`GitInfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/GitInfoContributor.java) | Exposes git information. | A `git.properties` resource. | |`java` | [`JavaInfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/JavaInfoContributor.java) | Exposes Java runtime information. | None. | Whether or not an individual contributor is enabled is controlled by its `management.info.<id>.enabled` property. Different contributors have different defaults for this property, depending on their prerequisites and the nature of the information that they expose. With no prerequisites to indicate that they should be enabled, the `env` and `java` contributors are disabled by default. You can enable them by setting the `management.info.env.enabled` or `management.info.java.enabled` properties to `true`. The `build` and `git` info contributors are enabled by default. Each can be disabled by setting its `management.info.<id>.enabled` property to `false`. Alternatively, to disable every contributor that is usually enabled by default, set the `management.info.defaults.enabled` property to `false`. #### 2.10.2. Custom Application Information When the `env` contributor is enabled, you can customize the data exposed by the `info` endpoint by setting `info.*` Spring properties. All `Environment` properties under the `info` key are automatically exposed. For example, you could add the following settings to your `application.properties` file: Properties ``` info.app.encoding=UTF-8 info.app.java.source=11 info.app.java.target=11 ``` Yaml ``` info: app: encoding: "UTF-8" java: source: "11" target: "11" ``` | |Rather than hardcoding those values, you could also [expand info properties at build time](howto.html#howto.properties-and-configuration.expand-properties).

Assuming you use Maven, you could rewrite the preceding example as follows:

Properties

```
info.app.encoding=@project.build.sourceEncoding@
info.app.java.source=@java.version@
info.app.java.target=@java.version@
```

Yaml

```
info:
  app:
    encoding: "@project.build.sourceEncoding@"
    java:
      source: "@java.version@"
      target: "@java.version@"
```| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 2.10.3. Git Commit Information Another useful feature of the `info` endpoint is its ability to publish information about the state of your `git` source code repository when the project was built. If a `GitProperties` bean is available, you can use the `info` endpoint to expose these properties. | |A `GitProperties` bean is auto-configured if a `git.properties` file is available at the root of the classpath.
See "[how to generate git information](howto.html#howto.build.generate-git-info)" for more detail.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| By default, the endpoint exposes `git.branch`, `git.commit.id`, and `git.commit.time` properties, if present. If you do not want any of these properties in the endpoint response, they need to be excluded from the `git.properties` file. If you want to display the full git information (that is, the full content of `git.properties`), use the `management.info.git.mode` property, as follows: Properties ``` management.info.git.mode=full ``` Yaml ``` management: info: git: mode: "full" ``` To disable the git commit information from the `info` endpoint completely, set the `management.info.git.enabled` property to `false`, as follows: Properties ``` management.info.git.enabled=false ``` Yaml ``` management: info: git: enabled: false ``` #### 2.10.4. Build Information If a `BuildProperties` bean is available, the `info` endpoint can also publish information about your build. This happens if a `META-INF/build-info.properties` file is available in the classpath. | |The Maven and Gradle plugins can both generate that file.
See "[how to generate build information](howto.html#howto.build.generate-info)" for more details.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 2.10.5. Java Information The `info` endpoint publishes information about your Java runtime environment, see [`JavaInfo`](https://docs.spring.io/spring-boot/docs/2.6.4/api/org/springframework/boot/info/JavaInfo.html) for more details. #### 2.10.6. Writing Custom InfoContributors To provide custom application information, you can register Spring beans that implement the [`InfoContributor`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/info/InfoContributor.java) interface. The following example contributes an `example` entry with a single value: ``` import java.util.Collections; import org.springframework.boot.actuate.info.Info; import org.springframework.boot.actuate.info.InfoContributor; import org.springframework.stereotype.Component; @Component public class MyInfoContributor implements InfoContributor { @Override public void contribute(Info.Builder builder) { builder.withDetail("example", Collections.singletonMap("key", "value")); } } ``` If you reach the `info` endpoint, you should see a response that contains the following additional entry: ``` { "example": { "key" : "value" } } ``` ## 3. Monitoring and Management over HTTP If you are developing a web application, Spring Boot Actuator auto-configures all enabled endpoints to be exposed over HTTP. The default convention is to use the `id` of the endpoint with a prefix of `/actuator` as the URL path. For example, `health` is exposed as `/actuator/health`. | |Actuator is supported natively with Spring MVC, Spring WebFlux, and Jersey.
If both Jersey and Spring MVC are available, Spring MVC is used.| |---|------------------------------------------------------------------------------------------------------------------------------------------------| | |Jackson is a required dependency in order to get the correct JSON responses as documented in the API documentation ([HTML](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/htmlsingle) or [PDF](https://docs.spring.io/spring-boot/docs/2.6.4/actuator-api/pdf/spring-boot-actuator-web-api.pdf)).| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 3.1. Customizing the Management Endpoint Paths ### Sometimes, it is useful to customize the prefix for the management endpoints. For example, your application might already use `/actuator` for another purpose. You can use the `management.endpoints.web.base-path` property to change the prefix for your management endpoint, as the following example shows: Properties ``` management.endpoints.web.base-path=/manage ``` Yaml ``` management: endpoints: web: base-path: "/manage" ``` The preceding `application.properties` example changes the endpoint from `/actuator/{id}` to `/manage/{id}` (for example, `/manage/info`). | |Unless the management port has been configured to [expose endpoints by using a different HTTP port](#actuator.monitoring.customizing-management-server-port), `management.endpoints.web.base-path` is relative to `server.servlet.context-path` (for servlet web applications) or `spring.webflux.base-path` (for reactive web applications).
If `management.server.port` is configured, `management.endpoints.web.base-path` is relative to `management.server.base-path`.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| If you want to map endpoints to a different path, you can use the `management.endpoints.web.path-mapping` property. The following example remaps `/actuator/health` to `/healthcheck`: Properties ``` management.endpoints.web.base-path=/ management.endpoints.web.path-mapping.health=healthcheck ``` Yaml ``` management: endpoints: web: base-path: "/" path-mapping: health: "healthcheck" ``` ### 3.2. Customizing the Management Server Port Exposing management endpoints by using the default HTTP port is a sensible choice for cloud-based deployments. If, however, your application runs inside your own data center, you may prefer to expose endpoints by using a different HTTP port. You can set the `management.server.port` property to change the HTTP port, as the following example shows: Properties ``` management.server.port=8081 ``` Yaml ``` management: server: port: 8081 ``` | |On Cloud Foundry, by default, applications receive requests only on port 8080 for both HTTP and TCP routing.
If you want to use a custom management port on Cloud Foundry, you need to explicitly set up the application’s routes to forward traffic to the custom port.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 3.3. Configuring Management-specific SSL When configured to use a custom port, you can also configure the management server with its own SSL by using the various `management.server.ssl.*` properties. For example, doing so lets a management server be available over HTTP while the main application uses HTTPS, as the following property settings show: Properties ``` server.port=8443 server.ssl.enabled=true server.ssl.key-store=classpath:store.jks server.ssl.key-password=secret management.server.port=8080 management.server.ssl.enabled=false ``` Yaml ``` server: port: 8443 ssl: enabled: true key-store: "classpath:store.jks" key-password: "secret" management: server: port: 8080 ssl: enabled: false ``` Alternatively, both the main server and the management server can use SSL but with different key stores, as follows: Properties ``` server.port=8443 server.ssl.enabled=true server.ssl.key-store=classpath:main.jks server.ssl.key-password=secret management.server.port=8080 management.server.ssl.enabled=true management.server.ssl.key-store=classpath:management.jks management.server.ssl.key-password=secret ``` Yaml ``` server: port: 8443 ssl: enabled: true key-store: "classpath:main.jks" key-password: "secret" management: server: port: 8080 ssl: enabled: true key-store: "classpath:management.jks" key-password: "secret" ``` ### 3.4. Customizing the Management Server Address You can customize the address on which the management endpoints are available by setting the `management.server.address` property. Doing so can be useful if you want to listen only on an internal or ops-facing network or to listen only for connections from `localhost`. | |You can listen on a different address only when the port differs from the main server port.| |---|-------------------------------------------------------------------------------------------| The following example `application.properties` does not allow remote management connections: Properties ``` management.server.port=8081 management.server.address=127.0.0.1 ``` Yaml ``` management: server: port: 8081 address: "127.0.0.1" ``` ### 3.5. Disabling HTTP Endpoints If you do not want to expose endpoints over HTTP, you can set the management port to `-1`, as the following example shows: Properties ``` management.server.port=-1 ``` Yaml ``` management: server: port: -1 ``` You can also achieve this by using the `management.endpoints.web.exposure.exclude` property, as the following example shows: Properties ``` management.endpoints.web.exposure.exclude=* ``` Yaml ``` management: endpoints: web: exposure: exclude: "*" ``` ## 4. Monitoring and Management over JMX Java Management Extensions (JMX) provide a standard mechanism to monitor and manage applications. By default, this feature is not enabled. You can turn it on by setting the `spring.jmx.enabled` configuration property to `true`. Spring Boot exposes the most suitable `MBeanServer` as a bean with an ID of `mbeanServer`. Any of your beans that are annotated with Spring JMX annotations (`@ManagedResource`, `@ManagedAttribute`, or `@ManagedOperation`) are exposed to it. 
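The following example shows such a bean; the class, attribute, and operation names are illustrative only and are not provided by Spring Boot:

```
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

@Component
@ManagedResource(description = "Example management bean")
public class MyJmxBean {

    // Hypothetical state, exposed here purely for illustration
    private volatile int refreshInterval = 60;

    @ManagedAttribute(description = "Refresh interval in seconds")
    public int getRefreshInterval() {
        return this.refreshInterval;
    }

    @ManagedAttribute
    public void setRefreshInterval(int refreshInterval) {
        this.refreshInterval = refreshInterval;
    }

    @ManagedOperation(description = "Trigger an immediate refresh")
    public void refreshNow() {
        // perform the refresh here
    }

}
```

With `spring.jmx.enabled` set to `true`, a bean such as this is registered with the `mbeanServer` bean and can be inspected with tools such as JConsole.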
If your platform provides a standard `MBeanServer`, Spring Boot uses that and defaults to the VM `MBeanServer`, if necessary. If all that fails, a new `MBeanServer` is created. See the [`JmxAutoConfiguration`](https://github.com/spring-projects/spring-boot/tree/v2.6.4/spring-boot-project/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/jmx/JmxAutoConfiguration.java) class for more details. By default, Spring Boot also exposes management endpoints as JMX MBeans under the `org.springframework.boot` domain. To take full control over endpoint registration in the JMX domain, consider registering your own `EndpointObjectNameFactory` implementation. ### 4.1. Customizing MBean Names The name of the MBean is usually generated from the `id` of the endpoint. For example, the `health` endpoint is exposed as `org.springframework.boot:type=Endpoint,name=Health`. If your application contains more than one Spring `ApplicationContext`, you may find that names clash. To solve this problem, you can set the `spring.jmx.unique-names` property to `true` so that MBean names are always unique. You can also customize the JMX domain under which endpoints are exposed. The following settings show an example of doing so in `application.properties`: Properties ``` spring.jmx.unique-names=true management.endpoints.jmx.domain=com.example.myapp ``` Yaml ``` spring: jmx: unique-names: true management: endpoints: jmx: domain: "com.example.myapp" ``` ### 4.2. Disabling JMX Endpoints If you do not want to expose endpoints over JMX, you can set the `management.endpoints.jmx.exposure.exclude` property to `*`, as the following example shows: Properties ``` management.endpoints.jmx.exposure.exclude=* ``` Yaml ``` management: endpoints: jmx: exposure: exclude: "*" ``` ### 4.3. Using Jolokia for JMX over HTTP Jolokia is a JMX-HTTP bridge that provides an alternative method of accessing JMX beans. To use Jolokia, include a dependency to `org.jolokia:jolokia-core`. For example, with Maven, you would add the following dependency: ``` org.jolokia jolokia-core ``` You can then expose the Jolokia endpoint by adding `jolokia` or `*` to the `management.endpoints.web.exposure.include` property. You can then access it by using `/actuator/jolokia` on your management HTTP server. | |The Jolokia endpoint exposes Jolokia’s servlet as an actuator endpoint.
As a result, it is specific to servlet environments, such as Spring MVC and Jersey.
The endpoint is not available in a WebFlux application.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 4.3.1. Customizing Jolokia Jolokia has a number of settings that you would traditionally configure by setting servlet parameters. With Spring Boot, you can use your `application.properties` file. To do so, prefix the parameter with `management.endpoint.jolokia.config.`, as the following example shows: Properties ``` management.endpoint.jolokia.config.debug=true ``` Yaml ``` management: endpoint: jolokia: config: debug: true ``` #### 4.3.2. Disabling Jolokia If you use Jolokia but do not want Spring Boot to configure it, set the `management.endpoint.jolokia.enabled` property to `false`, as follows: Properties ``` management.endpoint.jolokia.enabled=false ``` Yaml ``` management: endpoint: jolokia: enabled: false ``` ## 5. Loggers Spring Boot Actuator includes the ability to view and configure the log levels of your application at runtime. You can view either the entire list or an individual logger’s configuration, which is made up of both the explicitly configured logging level as well as the effective logging level given to it by the logging framework. These levels can be one of: * `TRACE` * `DEBUG` * `INFO` * `WARN` * `ERROR` * `FATAL` * `OFF` * `null` `null` indicates that there is no explicit configuration. ### 5.1. Configure a Logger To configure a given logger, `POST` a partial entity to the resource’s URI, as the following example shows: ``` { "configuredLevel": "DEBUG" } ``` | |To “reset” the specific level of the logger (and use the default configuration instead), you can pass a value of `null` as the `configuredLevel`.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------| ## 6. Metrics Spring Boot Actuator provides dependency management and auto-configuration for [Micrometer](https://micrometer.io), an application metrics facade that supports [numerous monitoring systems](https://micrometer.io/docs), including: * [AppOptics](#actuator.metrics.export.appoptics) * [Atlas](#actuator.metrics.export.atlas) * [Datadog](#actuator.metrics.export.datadog) * [Dynatrace](#actuator.metrics.export.dynatrace) * [Elastic](#actuator.metrics.export.elastic) * [Ganglia](#actuator.metrics.export.ganglia) * [Graphite](#actuator.metrics.export.graphite) * [Humio](#actuator.metrics.export.humio) * [Influx](#actuator.metrics.export.influx) * [JMX](#actuator.metrics.export.jmx) * [KairosDB](#actuator.metrics.export.kairos) * [New Relic](#actuator.metrics.export.newrelic) * [Prometheus](#actuator.metrics.export.prometheus) * [SignalFx](#actuator.metrics.export.signalfx) * [Simple (in-memory)](#actuator.metrics.export.simple) * [Stackdriver](#actuator.metrics.export.stackdriver) * [StatsD](#actuator.metrics.export.statsd) * [Wavefront](#actuator.metrics.export.wavefront) | |To learn more about Micrometer’s capabilities, see its [reference documentation](https://micrometer.io/docs), in particular the [concepts section](https://micrometer.io/docs/concepts).| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ### 6.1. 
Getting started Spring Boot auto-configures a composite `MeterRegistry` and adds a registry to the composite for each of the supported implementations that it finds on the classpath. Having a dependency on `micrometer-registry-{system}` in your runtime classpath is enough for Spring Boot to configure the registry. Most registries share common features. For instance, you can disable a particular registry even if the Micrometer registry implementation is on the classpath. The following example disables Datadog: Properties ``` management.metrics.export.datadog.enabled=false ``` Yaml ``` management: metrics: export: datadog: enabled: false ``` You can also disable all registries unless stated otherwise by the registry-specific property, as the following example shows: Properties ``` management.metrics.export.defaults.enabled=false ``` Yaml ``` management: metrics: export: defaults: enabled: false ``` Spring Boot also adds any auto-configured registries to the global static composite registry on the `Metrics` class, unless you explicitly tell it not to: Properties ``` management.metrics.use-global-registry=false ``` Yaml ``` management: metrics: use-global-registry: false ``` You can register any number of `MeterRegistryCustomizer` beans to further configure the registry, such as applying common tags, before any meters are registered with the registry: ``` import io.micrometer.core.instrument.MeterRegistry; import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyMeterRegistryConfiguration { @Bean public MeterRegistryCustomizer metricsCommonTags() { return (registry) -> registry.config().commonTags("region", "us-east-1"); } } ``` You can apply customizations to particular registry implementations by being more specific about the generic type: ``` import io.micrometer.core.instrument.Meter; import io.micrometer.core.instrument.config.NamingConvention; import io.micrometer.graphite.GraphiteMeterRegistry; import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyMeterRegistryConfiguration { @Bean public MeterRegistryCustomizer graphiteMetricsNamingConvention() { return (registry) -> registry.config().namingConvention(this::name); } private String name(String name, Meter.Type type, String baseUnit) { return ... } } ``` Spring Boot also [configures built-in instrumentation](#actuator.metrics.supported) that you can control through configuration or dedicated annotation markers. ### 6.2. Supported Monitoring Systems This section briefly describes each of the supported monitoring systems. #### 6.2.1. AppOptics By default, the AppOptics registry periodically pushes metrics to `[api.appoptics.com/v1/measurements](https://api.appoptics.com/v1/measurements)`. To export metrics to SaaS [AppOptics](https://micrometer.io/docs/registry/appOptics), your API token must be provided: Properties ``` management.metrics.export.appoptics.api-token=YOUR_TOKEN ``` Yaml ``` management: metrics: export: appoptics: api-token: "YOUR_TOKEN" ``` #### 6.2.2. Atlas By default, metrics are exported to [Atlas](https://micrometer.io/docs/registry/atlas) running on your local machine. 
You can provide the location of the [Atlas server](https://github.com/Netflix/atlas): Properties ``` management.metrics.export.atlas.uri=https://atlas.example.com:7101/api/v1/publish ``` Yaml ``` management: metrics: export: atlas: uri: "https://atlas.example.com:7101/api/v1/publish" ``` #### 6.2.3. Datadog A Datadog registry periodically pushes metrics to [datadoghq](https://www.datadoghq.com). To export metrics to [Datadog](https://micrometer.io/docs/registry/datadog), you must provide your API key: Properties ``` management.metrics.export.datadog.api-key=YOUR_KEY ``` Yaml ``` management: metrics: export: datadog: api-key: "YOUR_KEY" ``` You can also change the interval at which metrics are sent to Datadog: Properties ``` management.metrics.export.datadog.step=30s ``` Yaml ``` management: metrics: export: datadog: step: "30s" ``` #### 6.2.4. Dynatrace Dynatrace offers two metrics ingest APIs, both of which are implemented for [Micrometer](https://micrometer.io/docs/registry/dynatrace). Configuration properties in the `v1` namespace apply only when exporting to the [Timeseries v1 API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v1/). Configuration properties in the `v2` namespace apply only when exporting to the [Metrics v2 API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/). Note that this integration can export only to either the `v1` or `v2` version of the API at a time. If the `device-id` (required for v1 but not used in v2) is set in the `v1` namespace, metrics are exported to the `v1` endpoint. Otherwise, `v2` is assumed. ##### v2 API You can use the v2 API in two ways. If a local OneAgent is running on the host, metrics are automatically exported to the [local OneAgent ingest endpoint](https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/local-api/). The ingest endpoint forwards the metrics to the Dynatrace backend. This is the default behavior and requires no special setup beyond a dependency on `io.micrometer:micrometer-registry-dynatrace`. If no local OneAgent is running, the endpoint of the [Metrics v2 API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/) and an API token are required. The [API token](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/) must have the “Ingest metrics” (`metrics.ingest`) permission set. We recommend limiting the scope of the token to this one permission. You must ensure that the endpoint URI contains the path (for example, `/api/v2/metrics/ingest`): The URL of the Metrics API v2 ingest endpoint is different according to your deployment option: * SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest` * Managed deployments: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest` The example below configures metrics export using the `example` environment id: Properties ``` management.metrics.export.dynatrace.uri=https://example.live.dynatrace.com/api/v2/metrics/ingest management.metrics.export.dynatrace.api-token=YOUR_TOKEN ``` Yaml ``` management: metrics: export: dynatrace: uri: "https://example.live.dynatrace.com/api/v2/metrics/ingest" api-token: "YOUR_TOKEN" ``` When using the Dynatrace v2 API, the following optional features are available: * Metric key prefix: Sets a prefix that is prepended to all exported metric keys. 
* Enrich with Dynatrace metadata: If a OneAgent or Dynatrace operator is running, enrich metrics with additional metadata (for example, about the host, process, or pod). * Default dimensions: Specify key-value pairs that are added to all exported metrics. If tags with the same key are specified with Micrometer, they overwrite the default dimensions. It is possible to not specify a URI and API token, as shown in the following example. In this scenario, the local OneAgent endpoint is used: Properties ``` management.metrics.export.dynatrace.v2.metric-key-prefix=your.key.prefix management.metrics.export.dynatrace.v2.enrich-with-dynatrace-metadata=true management.metrics.export.dynatrace.v2.default-dimensions.key1=value1 management.metrics.export.dynatrace.v2.default-dimensions.key2=value2 ``` Yaml ``` management: metrics: export: dynatrace: # Specify uri and api-token here if not using the local OneAgent endpoint. v2: metric-key-prefix: "your.key.prefix" enrich-with-dynatrace-metadata: true default-dimensions: key1: "value1" key2: "value2" ``` ##### v1 API (Legacy) The Dynatrace v1 API metrics registry pushes metrics to the configured URI periodically by using the [Timeseries v1 API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v1/). For backwards-compatibility with existing setups, when `device-id` is set (required for v1, but not used in v2), metrics are exported to the Timeseries v1 endpoint. To export metrics to [Dynatrace](https://micrometer.io/docs/registry/dynatrace), your API token, device ID, and URI must be provided: Properties ``` management.metrics.export.dynatrace.uri=https://{your-environment-id}.live.dynatrace.com management.metrics.export.dynatrace.api-token=YOUR_TOKEN management.metrics.export.dynatrace.v1.device-id=YOUR_DEVICE_ID ``` Yaml ``` management: metrics: export: dynatrace: uri: "https://{your-environment-id}.live.dynatrace.com" api-token: "YOUR_TOKEN" v1: device-id: "YOUR_DEVICE_ID" ``` For the v1 API, you must specify the base environment URI without a path, as the v1 endpoint path is added automatically. ##### Version-independent Settings In addition to the API endpoint and token, you can also change the interval at which metrics are sent to Dynatrace. The default export interval is `60s`. The following example sets the export interval to 30 seconds: Properties ``` management.metrics.export.dynatrace.step=30s ``` Yaml ``` management: metrics: export: dynatrace: step: "30s" ``` You can find more information on how to set up the Dynatrace exporter for Micrometer in [the Micrometer documentation](https://micrometer.io/docs/registry/dynatrace). #### 6.2.5. Elastic By default, metrics are exported to [Elastic](https://micrometer.io/docs/registry/elastic) running on your local machine. You can provide the location of the Elastic server to use by using the following property: Properties ``` management.metrics.export.elastic.host=https://elastic.example.com:8086 ``` Yaml ``` management: metrics: export: elastic: host: "https://elastic.example.com:8086" ``` #### 6.2.6. Ganglia By default, metrics are exported to [Ganglia](https://micrometer.io/docs/registry/ganglia) running on your local machine. You can provide the [Ganglia server](http://ganglia.sourceforge.net) host and port, as the following example shows: Properties ``` management.metrics.export.ganglia.host=ganglia.example.com management.metrics.export.ganglia.port=9649 ``` Yaml ``` management: metrics: export: ganglia: host: "ganglia.example.com" port: 9649 ``` #### 6.2.7. 
Graphite By default, metrics are exported to [Graphite](https://micrometer.io/docs/registry/graphite) running on your local machine. You can provide the [Graphite server](https://graphiteapp.org) host and port, as the following example shows: Properties ``` management.metrics.export.graphite.host=graphite.example.com management.metrics.export.graphite.port=9004 ``` Yaml ``` management: metrics: export: graphite: host: "graphite.example.com" port: 9004 ``` Micrometer provides a default `HierarchicalNameMapper` that governs how a dimensional meter ID is [mapped to flat hierarchical names](https://micrometer.io/docs/registry/graphite#_hierarchical_name_mapping). | |To take control over this behavior, define your `GraphiteMeterRegistry` and supply your own `HierarchicalNameMapper`.
Auto-configured `GraphiteConfig` and `Clock` beans are provided unless you define your own:

```
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyGraphiteConfiguration {

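    // Reuse the auto-configured GraphiteConfig and Clock, but supply a custom name mapper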
    @Bean
    public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
        return new GraphiteMeterRegistry(config, clock, this::toHierarchicalName);
    }

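    // Builds the flat, hierarchical name for the given meter ID (implementation omitted here)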
    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        return ...
    }

}

```| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.2.8. Humio By default, the Humio registry periodically pushes metrics to [cloud.humio.com](https://cloud.humio.com). To export metrics to SaaS [Humio](https://micrometer.io/docs/registry/humio), you must provide your API token: Properties ``` management.metrics.export.humio.api-token=YOUR_TOKEN ``` Yaml ``` management: metrics: export: humio: api-token: "YOUR_TOKEN" ``` You should also configure one or more tags to identify the data source to which metrics are pushed: Properties ``` management.metrics.export.humio.tags.alpha=a management.metrics.export.humio.tags.bravo=b ``` Yaml ``` management: metrics: export: humio: tags: alpha: "a" bravo: "b" ``` #### 6.2.9. Influx By default, metrics are exported to an [Influx](https://micrometer.io/docs/registry/influx) v1 instance running on your local machine with the default configuration. To export metrics to InfluxDB v2, configure the `org`, `bucket`, and authentication `token` for writing metrics. You can provide the location of the [Influx server](https://www.influxdata.com) to use by using: Properties ``` management.metrics.export.influx.uri=https://influx.example.com:8086 ``` Yaml ``` management: metrics: export: influx: uri: "https://influx.example.com:8086" ``` #### 6.2.10. JMX Micrometer provides a hierarchical mapping to [JMX](https://micrometer.io/docs/registry/jmx), primarily as a cheap and portable way to view metrics locally. By default, metrics are exported to the `metrics` JMX domain. You can provide the domain to use by using: Properties ``` management.metrics.export.jmx.domain=com.example.app.metrics ``` Yaml ``` management: metrics: export: jmx: domain: "com.example.app.metrics" ``` Micrometer provides a default `HierarchicalNameMapper` that governs how a dimensional meter ID is [mapped to flat hierarchical names](https://micrometer.io/docs/registry/jmx#_hierarchical_name_mapping). | |To take control over this behavior, define your `JmxMeterRegistry` and supply your own `HierarchicalNameMapper`.
Auto-configured `JmxConfig` and `Clock` beans are provided unless you define your own:

```
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.jmx.JmxConfig;
import io.micrometer.jmx.JmxMeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyJmxConfiguration {

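    // As with the Graphite registry above, reuse the auto-configured JmxConfig and Clock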
    @Bean
    public JmxMeterRegistry jmxMeterRegistry(JmxConfig config, Clock clock) {
        return new JmxMeterRegistry(config, clock, this::toHierarchicalName);
    }

    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        return ...
    }

}

```| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.2.11. KairosDB By default, metrics are exported to [KairosDB](https://micrometer.io/docs/registry/kairos) running on your local machine. You can provide the location of the [KairosDB server](https://kairosdb.github.io/) to use by using: Properties ``` management.metrics.export.kairos.uri=https://kairosdb.example.com:8080/api/v1/datapoints ``` Yaml ``` management: metrics: export: kairos: uri: "https://kairosdb.example.com:8080/api/v1/datapoints" ``` #### 6.2.12. New Relic A New Relic registry periodically pushes metrics to [New Relic](https://micrometer.io/docs/registry/new-relic). To export metrics to [New Relic](https://newrelic.com), you must provide your API key and account ID: Properties ``` management.metrics.export.newrelic.api-key=YOUR_KEY management.metrics.export.newrelic.account-id=YOUR_ACCOUNT_ID ``` Yaml ``` management: metrics: export: newrelic: api-key: "YOUR_KEY" account-id: "YOUR_ACCOUNT_ID" ``` You can also change the interval at which metrics are sent to New Relic: Properties ``` management.metrics.export.newrelic.step=30s ``` Yaml ``` management: metrics: export: newrelic: step: "30s" ``` By default, metrics are published through REST calls, but you can also use the Java Agent API if you have it on the classpath: Properties ``` management.metrics.export.newrelic.client-provider-type=insights-agent ``` Yaml ``` management: metrics: export: newrelic: client-provider-type: "insights-agent" ``` Finally, you can take full control by defining your own `NewRelicClientProvider` bean. #### 6.2.13. Prometheus [Prometheus](https://micrometer.io/docs/registry/prometheus) expects to scrape or poll individual application instances for metrics. Spring Boot provides an actuator endpoint at `/actuator/prometheus` to present a [Prometheus scrape](https://prometheus.io) with the appropriate format. | |By default, the endpoint is not available and must be exposed. 
See [exposing endpoints](#actuator.endpoints.exposing) for more details.| |---|---------------------------------------------------------------------------------------------------------------------------------------| The following example `scrape_config` adds to `prometheus.yml`: ``` scrape_configs: - job_name: "spring" metrics_path: "/actuator/prometheus" static_configs: - targets: ["HOST:PORT"] ``` For ephemeral or batch jobs that may not exist long enough to be scraped, you can use [Prometheus Pushgateway](https://github.com/prometheus/pushgateway) support to expose the metrics to Prometheus. To enable Prometheus Pushgateway support, add the following dependency to your project: ``` io.prometheus simpleclient_pushgateway ``` When the Prometheus Pushgateway dependency is present on the classpath and the `management.metrics.export.prometheus.pushgateway.enabled` property is set to `true`, a `PrometheusPushGatewayManager` bean is auto-configured. This manages the pushing of metrics to a Prometheus Pushgateway. You can tune the `PrometheusPushGatewayManager` by using properties under `management.metrics.export.prometheus.pushgateway`. For advanced configuration, you can also provide your own `PrometheusPushGatewayManager` bean. #### 6.2.14. SignalFx SignalFx registry periodically pushes metrics to [SignalFx](https://micrometer.io/docs/registry/signalFx). To export metrics to [SignalFx](https://www.signalfx.com), you must provide your access token: Properties ``` management.metrics.export.signalfx.access-token=YOUR_ACCESS_TOKEN ``` Yaml ``` management: metrics: export: signalfx: access-token: "YOUR_ACCESS_TOKEN" ``` You can also change the interval at which metrics are sent to SignalFx: Properties ``` management.metrics.export.signalfx.step=30s ``` Yaml ``` management: metrics: export: signalfx: step: "30s" ``` #### 6.2.15. Simple Micrometer ships with a simple, in-memory backend that is automatically used as a fallback if no other registry is configured. This lets you see what metrics are collected in the [metrics endpoint](#actuator.metrics.endpoint). The in-memory backend disables itself as soon as you use any other available backend. You can also disable it explicitly: Properties ``` management.metrics.export.simple.enabled=false ``` Yaml ``` management: metrics: export: simple: enabled: false ``` #### 6.2.16. Stackdriver The Stackdriver registry periodically pushes metrics to [Stackdriver](https://cloud.google.com/stackdriver/). To export metrics to SaaS [Stackdriver](https://micrometer.io/docs/registry/stackdriver), you must provide your Google Cloud project ID: Properties ``` management.metrics.export.stackdriver.project-id=my-project ``` Yaml ``` management: metrics: export: stackdriver: project-id: "my-project" ``` You can also change the interval at which metrics are sent to Stackdriver: Properties ``` management.metrics.export.stackdriver.step=30s ``` Yaml ``` management: metrics: export: stackdriver: step: "30s" ``` #### 6.2.17. StatsD The StatsD registry eagerly pushes metrics over UDP to a StatsD agent. By default, metrics are exported to a [StatsD](https://micrometer.io/docs/registry/statsD) agent running on your local machine. 
You can provide the StatsD agent host, port, and protocol to use by using: Properties ``` management.metrics.export.statsd.host=statsd.example.com management.metrics.export.statsd.port=9125 management.metrics.export.statsd.protocol=udp ``` Yaml ``` management: metrics: export: statsd: host: "statsd.example.com" port: 9125 protocol: "udp" ``` You can also change the StatsD line protocol to use (it defaults to Datadog): Properties ``` management.metrics.export.statsd.flavor=etsy ``` Yaml ``` management: metrics: export: statsd: flavor: "etsy" ``` #### 6.2.18. Wavefront The Wavefront registry periodically pushes metrics to [Wavefront](https://micrometer.io/docs/registry/wavefront). If you are exporting metrics to [Wavefront](https://www.wavefront.com/) directly, you must provide your API token: Properties ``` management.metrics.export.wavefront.api-token=YOUR_API_TOKEN ``` Yaml ``` management: metrics: export: wavefront: api-token: "YOUR_API_TOKEN" ``` Alternatively, you can use a Wavefront sidecar or an internal proxy in your environment to forward metrics data to the Wavefront API host: Properties ``` management.metrics.export.wavefront.uri=proxy://localhost:2878 ``` Yaml ``` management: metrics: export: wavefront: uri: "proxy://localhost:2878" ``` | |If you publish metrics to a Wavefront proxy (as described in [the Wavefront documentation](https://docs.wavefront.com/proxies_installing.html)), the host must be in the `proxy://HOST:PORT` format.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| You can also change the interval at which metrics are sent to Wavefront: Properties ``` management.metrics.export.wavefront.step=30s ``` Yaml ``` management: metrics: export: wavefront: step: "30s" ``` ### 6.3. Supported Metrics and Meters Spring Boot provides automatic meter registration for a wide variety of technologies. In most situations, the defaults provide sensible metrics that can be published to any of the supported monitoring systems. #### 6.3.1. JVM Metrics Auto-configuration enables JVM Metrics by using core Micrometer classes. JVM metrics are published under the `jvm.` meter name. The following JVM metrics are provided: * Various memory and buffer pool details * Statistics related to garbage collection * Thread utilization * The number of classes loaded and unloaded #### 6.3.2. System Metrics Auto-configuration enables system metrics by using core Micrometer classes. System metrics are published under the `system.`, `process.`, and `disk.` meter names. The following system metrics are provided: * CPU metrics * File descriptor metrics * Uptime metrics (both the amount of time the application has been running and a fixed gauge of the absolute start time) * Disk space available #### 6.3.3. Application Startup Metrics Auto-configuration exposes application startup time metrics: * `application.started.time`: time taken to start the application. * `application.ready.time`: time taken for the application to be ready to service requests. Metrics are tagged by the fully qualified name of the application class. #### 6.3.4. Logger Metrics Auto-configuration enables the event metrics for both Logback and Log4J2. The details are published under the `log4j2.events.` or `logback.events.` meter names. #### 6.3.5. 
Task Execution and Scheduling Metrics Auto-configuration enables the instrumentation of all available `ThreadPoolTaskExecutor` and `ThreadPoolTaskScheduler` beans, as long as the underlying `ThreadPoolExecutor` is available. Metrics are tagged by the name of the executor, which is derived from the bean name. #### 6.3.6. Spring MVC Metrics Auto-configuration enables the instrumentation of all requests handled by Spring MVC controllers and functional handlers. By default, metrics are generated with the name, `http.server.requests`. You can customize the name by setting the `management.metrics.web.server.request.metric-name` property. `@Timed` annotations are supported on `@Controller` classes and `@RequestMapping` methods (see [@Timed Annotation Support](#actuator.metrics.supported.timed-annotation) for details). If you do not want to record metrics for all Spring MVC requests, you can set `management.metrics.web.server.request.autotime.enabled` to `false` and exclusively use `@Timed` annotations instead. By default, Spring MVC related metrics are tagged with the following information: | Tag | Description | |-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |`exception`| The simple class name of any exception that was thrown while handling the request. | | `method` | The request’s method (for example, `GET` or `POST`) | | `outcome` |The request’s outcome, based on the status code of the response.
1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`| | `status` | The response’s HTTP status code (for example, `200` or `500`) | | `uri` | The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`) | To add to the default tags, provide one or more `@Bean`s that implement `WebMvcTagsContributor`. To replace the default tags, provide a `@Bean` that implements `WebMvcTagsProvider`. | |In some cases, exceptions handled in web controllers are not recorded as request metrics tags.
Applications can opt in and record exceptions by [setting handled exceptions as request attributes](web.html#web.servlet.spring-mvc.error-handling).| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.3.7. Spring WebFlux Metrics Auto-configuration enables the instrumentation of all requests handled by Spring WebFlux controllers and functional handlers. By default, metrics are generated with the name, `http.server.requests`. You can customize the name by setting the `management.metrics.web.server.request.metric-name` property. `@Timed` annotations are supported on `@Controller` classes and `@RequestMapping` methods (see [@Timed Annotation Support](#actuator.metrics.supported.timed-annotation) for details). If you do not want to record metrics for all Spring WebFlux requests, you can set `management.metrics.web.server.request.autotime.enabled` to `false` and exclusively use `@Timed` annotations instead. By default, WebFlux related metrics are tagged with the following information: | Tag | Description | |-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |`exception`| The simple class name of any exception that was thrown while handling the request. | | `method` | The request’s method (for example, `GET` or `POST`) | | `outcome` |The request’s outcome, based on the status code of the response.
1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`| | `status` | The response’s HTTP status code (for example, `200` or `500`) | | `uri` | The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`) | To add to the default tags, provide one or more beans that implement `WebFluxTagsContributor`. To replace the default tags, provide a bean that implements `WebFluxTagsProvider`. | |In some cases, exceptions handled in controllers and handler functions are not recorded as request metrics tags.
Applications can opt in and record exceptions by [setting handled exceptions as request attributes](web.html#web.reactive.webflux.error-handling).| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.3.8. Jersey Server Metrics Auto-configuration enables the instrumentation of all requests handled by the Jersey JAX-RS implementation. By default, metrics are generated with the name, `http.server.requests`. You can customize the name by setting the `management.metrics.web.server.request.metric-name` property. `@Timed` annotations are supported on request-handling classes and methods (see [@Timed Annotation Support](#actuator.metrics.supported.timed-annotation) for details). If you do not want to record metrics for all Jersey requests, you can set `management.metrics.web.server.request.autotime.enabled` to `false` and exclusively use `@Timed` annotations instead. By default, Jersey server metrics are tagged with the following information: | Tag | Description | |-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |`exception`| The simple class name of any exception that was thrown while handling the request. | | `method` | The request’s method (for example, `GET` or `POST`) | | `outcome` |The request’s outcome, based on the status code of the response.
1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`| | `status` | The response’s HTTP status code (for example, `200` or `500`) | | `uri` | The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`) | To customize the tags, provide a `@Bean` that implements `JerseyTagsProvider`. #### 6.3.9. HTTP Client Metrics Spring Boot Actuator manages the instrumentation of both `RestTemplate` and `WebClient`. For that, you have to inject the auto-configured builder and use it to create instances: * `RestTemplateBuilder` for `RestTemplate` * `WebClient.Builder` for `WebClient` You can also manually apply the customizers responsible for this instrumentation, namely `MetricsRestTemplateCustomizer` and `MetricsWebClientCustomizer`. By default, metrics are generated with the name, `http.client.requests`. You can customize the name by setting the `management.metrics.web.client.request.metric-name` property. By default, metrics generated by an instrumented client are tagged with the following information: | Tag | Description | |------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |`clientName`| The host portion of the URI | | `method` | The request’s method (for example, `GET` or `POST`) | | `outcome` |The request’s outcome, based on the status code of the response.
1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`. Otherwise, it is `UNKNOWN`.| | `status` | The response’s HTTP status code if available (for example, `200` or `500`) or `IO_ERROR` in case of I/O issues. Otherwise, it is `CLIENT_ERROR`. | | `uri` | The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`) | To customize the tags, and depending on your choice of client, you can provide a `@Bean` that implements `RestTemplateExchangeTagsProvider` or `WebClientExchangeTagsProvider`. There are convenience static functions in `RestTemplateExchangeTags` and `WebClientExchangeTags`. #### 6.3.10. Tomcat Metrics Auto-configuration enables the instrumentation of Tomcat only when an `MBeanRegistry` is enabled. By default, the `MBeanRegistry` is disabled, but you can enable it by setting `server.tomcat.mbeanregistry.enabled` to `true`. Tomcat metrics are published under the `tomcat.` meter name. #### 6.3.11. Cache Metrics Auto-configuration enables the instrumentation of all available `Cache` instances on startup, with metrics prefixed with `cache`. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available. The following cache libraries are supported: * Caffeine * EhCache 2 * Hazelcast * Any compliant JCache (JSR-107) implementation * Redis Metrics are tagged by the name of the cache and by the name of the `CacheManager`, which is derived from the bean name. | |Only caches that are configured on startup are bound to the registry.
For caches not defined in the cache’s configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required.
A `CacheMetricsRegistrar` bean is made available to make that process easier.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.3.12. DataSource Metrics Auto-configuration enables the instrumentation of all available `DataSource` objects with metrics prefixed with `jdbc.connections`. Data source instrumentation results in gauges that represent the currently active, idle, maximum allowed, and minimum allowed connections in the pool. Metrics are also tagged by the name of the `DataSource` computed based on the bean name. | |By default, Spring Boot provides metadata for all supported data sources.
You can add additional `DataSourcePoolMetadataProvider` beans if your favorite data source is not supported.
See `DataSourcePoolMetadataProvidersConfiguration` for examples.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Also, Hikari-specific metrics are exposed with a `hikaricp` prefix. Each metric is tagged by the name of the pool (you can control it with `spring.datasource.name`). #### 6.3.13. Hibernate Metrics If `org.hibernate:hibernate-micrometer` is on the classpath, all available Hibernate `EntityManagerFactory` instances that have statistics enabled are instrumented with a metric named `hibernate`. Metrics are also tagged by the name of the `EntityManagerFactory`, which is derived from the bean name. To enable statistics, the standard JPA property `hibernate.generate_statistics` must be set to `true`. You can enable that on the auto-configured `EntityManagerFactory`: Properties ``` spring.jpa.properties[hibernate.generate_statistics]=true ``` Yaml ``` spring: jpa: properties: "[hibernate.generate_statistics]": true ``` #### 6.3.14. Spring Data Repository Metrics Auto-configuration enables the instrumentation of all Spring Data `Repository` method invocations. By default, metrics are generated with the name, `spring.data.repository.invocations`. You can customize the name by setting the `management.metrics.data.repository.metric-name` property. `@Timed` annotations are supported on `Repository` classes and methods (see [@Timed Annotation Support](#actuator.metrics.supported.timed-annotation) for details). If you do not want to record metrics for all `Repository` invocations, you can set `management.metrics.data.repository.autotime.enabled` to `false` and exclusively use `@Timed` annotations instead. By default, repository invocation related metrics are tagged with the following information: | Tag | Description | |------------|---------------------------------------------------------------------------| |`repository`| The simple class name of the source `Repository`. | | `method` | The name of the `Repository` method that was invoked. | | `state` | The result state (`SUCCESS`, `ERROR`, `CANCELED`, or `RUNNING`). | |`exception` |The simple class name of any exception that was thrown from the invocation.| To replace the default tags, provide a `@Bean` that implements `RepositoryTagsProvider`. #### 6.3.15. RabbitMQ Metrics Auto-configuration enables the instrumentation of all available RabbitMQ connection factories with a metric named `rabbitmq`. #### 6.3.16. Spring Integration Metrics Spring Integration automatically provides [Micrometer support](https://docs.spring.io/spring-integration/docs/5.5.9/reference/html/system-management.html#micrometer-integration) whenever a `MeterRegistry` bean is available. Metrics are published under the `spring.integration.` meter name. #### 6.3.17. Kafka Metrics Auto-configuration registers a `MicrometerConsumerListener` and `MicrometerProducerListener` for the auto-configured consumer factory and producer factory, respectively. It also registers a `KafkaStreamsMicrometerListener` for `StreamsBuilderFactoryBean`. For more detail, see the [Micrometer Native Metrics](https://docs.spring.io/spring-kafka/docs/2.8.3/reference/html/#micrometer-native) section of the Spring Kafka documentation. #### 6.3.18. MongoDB Metrics This section briefly describes the available metrics for MongoDB. 
##### MongoDB Command Metrics Auto-configuration registers a `MongoMetricsCommandListener` with the auto-configured `MongoClient`. A timer metric named `mongodb.driver.commands` is created for each command issued to the underlying MongoDB driver. Each metric is tagged with the following information by default: | Tag | Description | |----------------|------------------------------------------------------------| | `command` | The name of the command issued. | | `cluster.id` |The identifier of the cluster to which the command was sent.| |`server.address`| The address of the server to which the command was sent. | | `status` | The outcome of the command (`SUCCESS` or `FAILED`). | To replace the default metric tags, define a `MongoCommandTagsProvider` bean, as the following example shows: ``` import io.micrometer.core.instrument.binder.mongodb.MongoCommandTagsProvider; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyCommandTagsProviderConfiguration { @Bean public MongoCommandTagsProvider customCommandTagsProvider() { return new CustomCommandTagsProvider(); } } ``` To disable the auto-configured command metrics, set the following property: Properties ``` management.metrics.mongo.command.enabled=false ``` Yaml ``` management: metrics: mongo: command: enabled: false ``` ##### MongoDB Connection Pool Metrics Auto-configuration registers a `MongoMetricsConnectionPoolListener` with the auto-configured `MongoClient`. The following gauge metrics are created for the connection pool: * `mongodb.driver.pool.size` reports the current size of the connection pool, including idle and in-use members. * `mongodb.driver.pool.checkedout` reports the count of connections that are currently in use. * `mongodb.driver.pool.waitqueuesize` reports the current size of the wait queue for a connection from the pool. Each metric is tagged with the following information by default: | Tag | Description | |----------------|-----------------------------------------------------------------------| | `cluster.id` |The identifier of the cluster to which the connection pool corresponds.| |`server.address`| The address of the server to which the connection pool corresponds. | To replace the default metric tags, define a `MongoConnectionPoolTagsProvider` bean: ``` import io.micrometer.core.instrument.binder.mongodb.MongoConnectionPoolTagsProvider; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyConnectionPoolTagsProviderConfiguration { @Bean public MongoConnectionPoolTagsProvider customConnectionPoolTagsProvider() { return new CustomConnectionPoolTagsProvider(); } } ``` To disable the auto-configured connection pool metrics, set the following property: Properties ``` management.metrics.mongo.connectionpool.enabled=false ``` Yaml ``` management: metrics: mongo: connectionpool: enabled: false ``` #### 6.3.19. Jetty Metrics Auto-configuration binds metrics for Jetty’s `ThreadPool` by using Micrometer’s `JettyServerThreadPoolMetrics`. Metrics for Jetty’s `Connector` instances are bound by using Micrometer’s `JettyConnectionMetrics` and, when `server.ssl.enabled` is set to `true`, Micrometer’s `JettySslHandshakeMetrics`. #### 6.3.20.
@Timed Annotation Support You can use the `@Timed` annotation from the `io.micrometer.core.annotation` package with several of the supported technologies described earlier. If supported, you can use the annotation at either the class level or the method level. For example, the following code shows how you can use the annotation to instrument all request mappings in a `@RestController`: ``` import java.util.List; import io.micrometer.core.annotation.Timed; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController @Timed public class MyController { @GetMapping("/api/addresses") public List<Address> listAddress() { return ... } @GetMapping("/api/people") public List<Person> listPeople() { return ... } } ``` If you want only to instrument a single mapping, you can use the annotation on the method instead of the class: ``` import java.util.List; import io.micrometer.core.annotation.Timed; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController public class MyController { @GetMapping("/api/addresses") public List<Address> listAddress() { return ... } @GetMapping("/api/people") @Timed public List<Person> listPeople() { return ... } } ``` You can also combine class-level and method-level annotations if you want to change the timing details for a specific method: ``` import java.util.List; import io.micrometer.core.annotation.Timed; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController @Timed public class MyController { @GetMapping("/api/addresses") public List<Address> listAddress() { return ... } @GetMapping("/api/people") @Timed(extraTags = { "region", "us-east-1" }) @Timed(value = "all.people", longTask = true) public List<Person> listPeople() { return ... } } ``` | |A `@Timed` annotation with `longTask = true` enables a long task timer for the method.
Long task timers require a separate metric name and can be stacked with a short task timer.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.3.21. Redis Metrics Auto-configuration registers a `MicrometerCommandLatencyRecorder` for the auto-configured `LettuceConnectionFactory`. For more detail, see the [Micrometer Metrics section](https://lettuce.io/core/6.1.6.RELEASE/reference/index.html#command.latency.metrics.micrometer) of the Lettuce documentation. ### 6.4. Registering Custom Metrics To register custom metrics, inject `MeterRegistry` into your component: ``` import io.micrometer.core.instrument.MeterRegistry; import io.micrometer.core.instrument.Tags; import org.springframework.stereotype.Component; @Component public class MyBean { private final Dictionary dictionary; public MyBean(MeterRegistry registry) { this.dictionary = Dictionary.load(); registry.gauge("dictionary.size", Tags.empty(), this.dictionary.getWords().size()); } } ``` If your metrics depend on other beans, we recommend that you use a `MeterBinder` to register them: ``` import io.micrometer.core.instrument.Gauge; import io.micrometer.core.instrument.binder.MeterBinder; import org.springframework.context.annotation.Bean; public class MyMeterBinderConfiguration { @Bean public MeterBinder queueSize(Queue queue) { return (registry) -> Gauge.builder("queueSize", queue::size).register(registry); } } ``` Using a `MeterBinder` ensures that the correct dependency relationships are set up and that the bean is available when the metric’s value is retrieved. A `MeterBinder` implementation can also be useful if you find that you repeatedly instrument a suite of metrics across components or applications. | |By default, metrics from all `MeterBinder` beans are automatically bound to the Spring-managed `MeterRegistry`.| |---|---------------------------------------------------------------------------------------------------------------| ### 6.5. Customizing Individual Metrics If you need to apply customizations to specific `Meter` instances, you can use the `io.micrometer.core.instrument.config.MeterFilter` interface. For example, if you want to rename the `mytag.region` tag to `mytag.area` for all meter IDs beginning with `com.example`, you can do the following: ``` import io.micrometer.core.instrument.config.MeterFilter; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyMetricsFilterConfiguration { @Bean public MeterFilter renameRegionTagMeterFilter() { return MeterFilter.renameTag("com.example", "mytag.region", "mytag.area"); } } ``` | |By default, all `MeterFilter` beans are automatically bound to the Spring-managed `MeterRegistry`.
Make sure to register your metrics by using the Spring-managed `MeterRegistry` and not any of the static methods on `Metrics`.
These use the global registry that is not Spring-managed.| |---|---------------------------------------------------------------------------------------------------------------| #### 6.5.1. Common Tags Common tags are generally used for dimensional drill-down on the operating environment, such as host, instance, region, stack, and others. Common tags are applied to all meters and can be configured, as the following example shows: Properties ``` management.metrics.tags.region=us-east-1 management.metrics.tags.stack=prod ``` Yaml ``` management: metrics: tags: region: "us-east-1" stack: "prod" ``` The preceding example adds `region` and `stack` tags to all meters with a value of `us-east-1` and `prod`, respectively.
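If you prefer to configure common tags in code, or need to compute their values at runtime, you can register them through a `MeterFilter` bean instead of properties. The following is a minimal sketch of that approach (the configuration class and bean names are illustrative):

```
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.config.MeterFilter;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyCommonTagsConfiguration {

    // Adds the same region and stack tags as the property-based example above.
    // MeterFilter beans are automatically bound to the Spring-managed MeterRegistry.
    @Bean
    public MeterFilter commonTagsMeterFilter() {
        return MeterFilter.commonTags(Tags.of("region", "us-east-1", "stack", "prod"));
    }

}
```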
| |The order of common tags is important if you use Graphite. As the order of common tags cannot be guaranteed when they are configured through properties, Graphite users are advised to define a custom `MeterFilter` instead.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| #### 6.5.2. Per-meter Properties In addition to `MeterFilter` beans, you can apply a limited set of customization on a per-meter basis by using properties. Per-meter customizations apply to any meter IDs that start with the given name. The following example disables any meters that have an ID starting with `example.remote`: Properties ``` management.metrics.enable.example.remote=false ``` Yaml ``` management: metrics: enable: example: remote: false ``` The following properties allow per-meter customization: | Property | Description | |------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------| | `management.metrics.enable` | Whether to prevent meters from emitting any metrics. | | `management.metrics.distribution.percentiles-histogram` | Whether to publish a histogram suitable for computing aggregable (across dimension) percentile approximations. | |`management.metrics.distribution.minimum-expected-value`, `management.metrics.distribution.maximum-expected-value`| Publish fewer histogram buckets by clamping the range of expected values. | | `management.metrics.distribution.percentiles` | Publish percentile values computed in your application. | | `management.metrics.distribution.expiry`, `management.metrics.distribution.buffer-length` |Give greater weight to recent samples by accumulating them in ring buffers which rotate after a configurable expiry, with a
configurable buffer length.| | `management.metrics.distribution.slo` | Publish a cumulative histogram with buckets defined by your service-level objectives. | For more details on the concepts behind `percentiles-histogram`, `percentiles`, and `slo`, see the [“Histograms and percentiles” section](https://micrometer.io/docs/concepts#_histograms_and_percentiles) of the Micrometer documentation. ### 6.6. Metrics Endpoint Spring Boot provides a `metrics` endpoint that you can use diagnostically to examine the metrics collected by an application. The endpoint is not available by default and must be exposed. See [exposing endpoints](#actuator.endpoints.exposing) for more details. Navigating to `/actuator/metrics` displays a list of available meter names. You can drill down to view information about a particular meter by providing its name as a selector — for example, `/actuator/metrics/jvm.memory.max`. | |The name you use here should match the name used in the code, not the name after it has been naming-convention normalized for a monitoring system to which it is shipped.
In other words, if `jvm.memory.max` appears as `jvm_memory_max` in Prometheus because of its snake case naming convention, you should still use `jvm.memory.max` as the selector when inspecting the meter in the `metrics` endpoint.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| You can also add any number of `tag=KEY:VALUE` query parameters to the end of the URL to dimensionally drill down on a meter — for example, `/actuator/metrics/jvm.memory.max?tag=area:nonheap`. | |The reported measurements are the *sum* of the statistics of all meters that match the meter name and any tags that have been applied.
In the preceding example, the returned `Value` statistic is the sum of the maximum memory footprints of the “Code Cache”, “Compressed Class Space”, and “Metaspace” non-heap memory areas. If you wanted to see only the maximum size for the “Metaspace”, you could add an additional `tag=id:Metaspace` — that is, `/actuator/metrics/jvm.memory.max?tag=area:nonheap&tag=id:Metaspace`.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
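As a reminder, the `metrics` endpoint is not exposed over HTTP by default, so the queries shown above require it to be added to the exposure list first. The following is a minimal sketch; include only the endpoints that you actually need:

Properties

```
management.endpoints.web.exposure.include=health,metrics
```

Yaml

```
management:
  endpoints:
    web:
      exposure:
        include: "health,metrics"
```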
## 7. Auditing Once Spring Security is in play, Spring Boot Actuator has a flexible audit framework that publishes events (by default, “authentication success”, “failure” and “access denied” exceptions). This feature can be very useful for reporting and for implementing a lock-out policy based on authentication failures. You can enable auditing by providing a bean of type `AuditEventRepository` in your application’s configuration. For convenience, Spring Boot offers an `InMemoryAuditEventRepository`. `InMemoryAuditEventRepository` has limited capabilities, and we recommend using it only for development environments. For production environments, consider creating your own alternative `AuditEventRepository` implementation. ### 7.1. Custom Auditing To customize published security events, you can provide your own implementations of `AbstractAuthenticationAuditListener` and `AbstractAuthorizationAuditListener`. You can also use the audit services for your own business events. To do so, either inject the `AuditEventRepository` bean into your own components and use that directly or publish an `AuditApplicationEvent` with the Spring `ApplicationEventPublisher` (by implementing `ApplicationEventPublisherAware`). ## 8. HTTP Tracing You can enable HTTP Tracing by providing a bean of type `HttpTraceRepository` in your application’s configuration. For convenience, Spring Boot offers `InMemoryHttpTraceRepository`, which stores traces for the last 100 (the default) request-response exchanges. `InMemoryHttpTraceRepository` is limited compared to other tracing solutions, and we recommend using it only for development environments. For production environments, we recommend using a production-ready tracing or observability solution, such as Zipkin or Spring Cloud Sleuth. Alternatively, you can create your own `HttpTraceRepository`. You can use the `httptrace` endpoint to obtain information about the request-response exchanges that are stored in the `HttpTraceRepository`. ### 8.1. Custom HTTP tracing To customize the items that are included in each trace, use the `management.trace.http.include` configuration property. For advanced customization, consider registering your own `HttpExchangeTracer` implementation. ## 9. Process Monitoring In the `spring-boot` module, you can find two classes to create files that are often useful for process monitoring: * `ApplicationPidFileWriter` creates a file that contains the application PID (by default, in the application directory with a file name of `application.pid`). * `WebServerPortFileWriter` creates a file (or files) that contain the ports of the running web server (by default, in the application directory with a file name of `application.port`). By default, these writers are not activated, but you can enable them: * [By Extending Configuration](#actuator.process-monitoring.configuration) * [Programmatically Enabling Process Monitoring](#actuator.process-monitoring.programmatically) ### 9.1. Extending Configuration In the `META-INF/spring.factories` file, you can activate the listener (or listeners) that writes a PID file: ``` org.springframework.context.ApplicationListener=\ org.springframework.boot.context.ApplicationPidFileWriter,\ org.springframework.boot.web.context.WebServerPortFileWriter ``` ### 9.2. Programmatically Enabling Process Monitoring You can also activate a listener by invoking the `SpringApplication.addListeners(…)` method and passing the appropriate `Writer` object. This method also lets you customize the file name and path in the `Writer` constructor.
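As a rough sketch of the programmatic approach, you can register the writers when building the application (the `MyApplication` class and the file paths are illustrative):

```
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.ApplicationPidFileWriter;
import org.springframework.boot.web.context.WebServerPortFileWriter;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(MyApplication.class);
        // Write the PID and web server port files when the application starts.
        application.addListeners(new ApplicationPidFileWriter("./run/app.pid"),
                new WebServerPortFileWriter("./run/app.port"));
        application.run(args);
    }

}
```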
## 10. Cloud Foundry Support Spring Boot’s actuator module includes additional support that is activated when you deploy to a compatible Cloud Foundry instance. The `/cloudfoundryapplication` path provides an alternative secured route to all `@Endpoint` beans. The extended support lets Cloud Foundry management UIs (such as the web application that you can use to view deployed applications) be augmented with Spring Boot actuator information. For example, an application status page can include full health information instead of the typical “running” or “stopped” status. | |The `/cloudfoundryapplication` path is not directly accessible to regular users.
To use the endpoint, you must pass a valid UAA token with the request.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------| ### 10.1. Disabling Extended Cloud Foundry Actuator Support If you want to fully disable the `/cloudfoundryapplication` endpoints, you can add the following setting to your `application.properties` file: Properties ``` management.cloudfoundry.enabled=false ``` Yaml ``` management: cloudfoundry: enabled: false ``` ### 10.2. Cloud Foundry Self-signed Certificates By default, the security verification for `/cloudfoundryapplication` endpoints makes SSL calls to various Cloud Foundry services. If your Cloud Foundry UAA or Cloud Controller services use self-signed certificates, you need to set the following property: Properties ``` management.cloudfoundry.skip-ssl-validation=true ``` Yaml ``` management: cloudfoundry: skip-ssl-validation: true ``` ### 10.3. Custom Context Path If the server’s context-path has been configured to anything other than `/`, the Cloud Foundry endpoints are not available at the root of the application. For example, if `server.servlet.context-path=/app`, Cloud Foundry endpoints are available at `/app/cloudfoundryapplication/*`. If you expect the Cloud Foundry endpoints to always be available at `/cloudfoundryapplication/*`, regardless of the server’s context-path, you need to explicitly configure that in your application. The configuration differs, depending on the web server in use. For Tomcat, you can add the following configuration: ``` import java.io.IOException; import java.util.Collections; import javax.servlet.GenericServlet; import javax.servlet.Servlet; import javax.servlet.ServletContainerInitializer; import javax.servlet.ServletContext; import javax.servlet.ServletException; import javax.servlet.ServletRequest; import javax.servlet.ServletResponse; import org.apache.catalina.Host; import org.apache.catalina.core.StandardContext; import org.apache.catalina.startup.Tomcat; import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory; import org.springframework.boot.web.servlet.ServletContextInitializer; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration(proxyBeanMethods = false) public class MyCloudFoundryConfiguration { @Bean public TomcatServletWebServerFactory servletWebServerFactory() { return new TomcatServletWebServerFactory() { @Override protected void prepareContext(Host host, ServletContextInitializer[] initializers) { super.prepareContext(host, initializers); StandardContext child = new StandardContext(); child.addLifecycleListener(new Tomcat.FixContextListener()); child.setPath("/cloudfoundryapplication"); ServletContainerInitializer initializer = getServletContextInitializer(getContextPath()); child.addServletContainerInitializer(initializer, Collections.emptySet()); child.setCrossContext(true); host.addChild(child); } }; } private ServletContainerInitializer getServletContextInitializer(String contextPath) { return (classes, context) -> { Servlet servlet = new GenericServlet() { @Override public void service(ServletRequest req, ServletResponse res) throws ServletException, IOException { ServletContext context = req.getServletContext().getContext(contextPath); context.getRequestDispatcher("/cloudfoundryapplication").forward(req, res); } }; context.addServlet("cloudfoundry", servlet).addMapping("/*"); }; } } ``` ## 11. 
What to Read Next You might want to read about graphing tools such as [Graphite](https://graphiteapp.org). Otherwise, you can continue on to read about [“deployment options”](deployment.html#deployment) or jump ahead for some in-depth information about Spring Boot’s [build tool plugins](build-tool-plugins.html#build-tool-plugins).