Data Normalization

The data normalization process is applied to any event received by Intelie Live that follows the raw format and has a related asset configured.

First step: Standard Channels configuration

To perform this process, one must define a set of Standard Channels (also called Standard Curves), each of which defines the mnemonic and the unit of measurement for a channel, independently of its source (well, rig or service company).
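
For illustration only, a set of standard channels can be thought of as a mapping from a standard mnemonic to its unit of measurement. The Python sketch below is a hypothetical representation; the field names and mnemonics are assumptions and do not reflect the actual Live configuration schema.

# Hypothetical representation of standard channel definitions: each entry
# fixes a mnemonic and a unit of measurement, regardless of the well, rig
# or service company that produces the data.
STANDARD_CHANNELS = {
    "BPOS": {"description": "Block position", "uom": "m"},
    "ROP": {"description": "Rate of penetration", "uom": "m/h"},
    "SPP": {"description": "Standpipe pressure", "uom": "psi"},
}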

Second step: Asset configuration

Data filtering

The next step is to define, for each rig or well, one or more possible sources for each channel. Only one source is used by the normalization process at a time; the configuration of additional sources is kept only to make it easier to switch the active source in the future.

Each source is defined by two levels of filters. The first filter is mandatory: it is defined at the asset level and limits all the data that will be matched to this asset, as the image below shows.

Although generally a simple filter is used to match the data to the asset, complex filters can also be used, such as

raw_well_1_offshore | raw_well_1_onshore

or

raw_all_wells liverig__object->log_name:Time*

The second filter is optional and is defined for each of the source's mnemonics. It is important for disambiguation, especially when different sources generate similar mnemonics.

An example of such a filter is:

liverig__object->name:Time*
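
As a rough illustration of how the two filter levels combine, the Python sketch below mimics the selection logic for the examples above: the asset-level filter restricts events by type and log name, and the per-mnemonic filter further restricts by the liverig__object name. The event-type field name (__type) and the matching logic are assumptions; this is only a sketch of the idea, not the actual Pipes filter engine.

# Minimal sketch of the two-level filtering idea (not the real filter engine).
def matches_asset(event):
    # Asset-level filter, e.g. "raw_all_wells liverig__object->log_name:Time*"
    return (event.get("__type") == "raw_all_wells"
            and event.get("liverig__object", {}).get("log_name", "").startswith("Time"))

def matches_mnemonic_source(event):
    # Optional per-mnemonic filter, e.g. "liverig__object->name:Time*"
    return event.get("liverig__object", {}).get("name", "").startswith("Time")

def select_for_normalization(events):
    return [e for e in events if matches_asset(e) and matches_mnemonic_source(e)]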

Name conversion

Each source must define a mnemonic. When the normalized event is generated, its mnemonic is taken from the standard channel configuration.
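
As an illustration, the renaming step can be seen as a lookup from the mnemonic configured for the source to the mnemonic of the standard channel. The mapping and function below are hypothetical, not part of the Live API.

# Hypothetical mapping from a source mnemonic to a standard mnemonic.
SOURCE_TO_STANDARD = {
    "BLKPOS": "BPOS",  # service-company specific name -> standard channel name
}

def standard_mnemonic(source_mnemonic):
    # Returns None if the source mnemonic is not configured for the asset.
    return SOURCE_TO_STANDARD.get(source_mnemonic)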

Unit conversion

Unit conversion is automatic. When an event is received, if it defines a unit in the uom field, the value is converted to match the standard channel unit.

Take a look here to find out the supported units.
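
Live performs this conversion internally; the snippet below only illustrates the idea using the pint library, assuming both the raw unit and the standard channel unit are known.

# Illustration of the unit-conversion idea using the pint library
# (this is not Live's implementation).
import pint

ureg = pint.UnitRegistry()

def convert(value, raw_uom, standard_uom):
    # e.g. convert(42.5, "ft", "m") -> 12.954
    return (value * ureg(raw_uom)).to(standard_uom).magnitude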

Clock synchronization

Whenever possible, normalized events will have the adjusted_index_timestamp field. It represents the moment the data point was generated on the source system, after some fixes are applied to the value reported by the service companies.
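
Purely as an illustration of what such an adjustment means, a simple offset-based correction could look like the sketch below; the offset-based model and the parameter names are assumptions, not the actual fixes applied by Live.

# Purely illustrative offset-based clock correction (assumed model).
def adjusted_index_timestamp(index_timestamp_ms, clock_offset_ms):
    # clock_offset_ms: estimated difference between the source system clock
    # and the reference clock (an assumption for this sketch).
    return index_timestamp_ms + clock_offset_ms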

Third step: Using the data

All the data, both raw and normalized, are stored and can be used to generate any type of analysis. Using normalized data guarantees that any analysis can be reused on data from any other source.

Raw data is represented by events whose root contains mnemonics as keys, each pointing to a map with the unit of measurement (uom) and the value of the measurement. Raw data events also contain some metadata fields, called liverig__index__timestamp, liverig__metadata and liverig__object.
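
For reference, a raw event might look like the Python sketch below; the mnemonics, values and metadata contents are illustrative.

# Illustrative raw event: mnemonics at the root, each pointing to a map
# with uom and value, plus the liverig__* metadata fields.
raw_event = {
    "BLKPOS": {"uom": "ft", "value": 42.5},
    "SPP": {"uom": "kPa", "value": 18950.0},
    "liverig__index__timestamp": 1618243200000,
    "liverig__metadata": {"source_name": "service_company_a"},  # contents are illustrative
    "liverig__object": {"name": "Time Log", "log_name": "Time Log"},  # contents are illustrative
}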

Each event that represents raw data is transformed into a set of normalized events. Each normalized event contains a field called value and another called uom (unit of measure), which hold the measurement converted to the standard unit of measurement. Each event also contains references to the raw data.
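
Continuing the sketch above, one of the normalized events derived from that raw event could look like the following; apart from value and uom, the field names are assumptions.

# Illustrative normalized event derived from the raw event above:
# the mnemonic was renamed and the value converted to the standard unit.
normalized_event = {
    "mnemonic": "BPOS",        # standard mnemonic (assumed field name)
    "value": 12.954,           # 42.5 ft converted to metres
    "uom": "m",
    "raw_mnemonic": "BLKPOS",  # reference to the raw data (assumed field name)
    "liverig__index__timestamp": 1618243200000,
}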

Although raw data has a fixed event format, the mnemonics, units of measurement and clock may vary depending on several factors, mainly the service company from which the data was collected.

If it is not possible to convert a value, the errors field will describe the possible reasons.

Since version 2.19.0, normalization can be enabled or disabled for each asset, using the toggle shown in the image below.

Renormalization

Sometimes a renormalization is needed, for example, to fix data generated with a wrong normalization configuration.

For this purpose, there is a renormalization option in the Configurations menu, which allows the process to be executed for a batch of assets simultaneously and provides a full status of all executions.

Starting with LiveRig 4.0 and Wells 5.0, renormalizations can be performed by depth criteria instead of date-time ordering.
