FMEPRD-247 Feature Management Documentation Enhancements #11238

Merged · 16 commits · Aug 22, 2025
@@ -30,7 +30,7 @@ Once your code is deployed, you can instantly turn on or off features for any in
FME provides visibility into your controlled releases by comparing data about feature flag evaluations with data about what happened after those evaluations. The data points that feed those comparisons are impressions and events. The results of those comparisons are called metrics.

### Impressions
-An impression is a record of a targeting decision made. It is created automatically each time a feature flag is evaluated and contains details about the user or unique key for which the evaluation was performed, the targeting decision, the targeting rule that drove that decision, and a time stamp. Refer to the [Impressions](/docs/feature-management-experimentation/feature-management/impressions) guide for more information.
+An impression is a record of a targeting decision made. It is created automatically each time a feature flag is evaluated and contains details about the user or unique key for which the evaluation was performed, the targeting decision, the targeting rule that drove that decision, and a time stamp. Refer to the [Impressions](/docs/feature-management-experimentation/feature-management/monitoring-analysis/impressions) guide for more information.

### Events
An event is a record of user or system behavior. Events can be as simple as a page visited, a button clicked, or response time observed, and as complex as a transaction record with a detailed list of properties. An event doesn’t refer to a feature flag. The association between flag evaluations and events is computed for you. An event, associated with a user (or other unique keys), arriving after a flag decision for that same unique key, is attributed to that evaluation by FME’s attribution engine.
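
As a sketch of how an event reaches FME, the JavaScript SDK exposes a `track` call on the client; the traffic type, event type, value, and property names below are illustrative, not taken from this guide:

```javascript
// Build the payload for an event record; the fields mirror the
// arguments of the SDK's track call (illustrative values).
function buildEvent(trafficType, eventType, value, properties) {
  return { trafficType, eventType, value, properties };
}

const event = buildEvent('user', 'checkout_completed', 74.5, { items: 3 });

// With an initialized Split client this would be sent as:
// client.track(event.trafficType, event.eventType, event.value, event.properties);
```

Because the event carries the same unique key as the flag evaluations, FME's attribution engine can later join it to the matching impressions.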
@@ -91,13 +91,13 @@ Projects provide separation or partitioning of work to reduce clutter or to enfo
Within each project, you may have multiple environments, such as development, staging, and production. Refer to the [Environments](/docs/feature-management-experimentation/management-and-administration/fme-settings/environments) guide for more information.

### Feature flags
-Feature flags are created at the project level where you specify the feature flag name, traffic type, owners, and description. Targeting rules are then created and managed at the environment level as part of the feature flag definition. Refer to the [Feature flag management](/docs/feature-management-experimentation/feature-management/create-a-feature-flag) guide for more information.
+Feature flags are created at the project level where you specify the feature flag name, traffic type, owners, and description. Targeting rules are then created and managed at the environment level as part of the feature flag definition. Refer to the [Feature flag management](/docs/feature-management-experimentation/feature-management/setup/create-a-feature-flag) guide for more information.

### Targeting rule
Targeting rules for each feature flag are created at the environment level. For example, this supports one set of rules in your staging environment and another in production. Rules may be based on user or device attributes, membership in a segment, a percentage of a randomly distributed population, a list of individually specified user or unique key targets, or any combination of the above.

### Segment
-A segment is a list of users or unique keys for targeting purposes. Segments are created at the environment level. Refer to the [Segments](/docs/feature-management-experimentation/feature-management/segments) guide for more information.
+A segment is a list of users or unique keys for targeting purposes. Segments are created at the environment level. Refer to the [Segments](/docs/feature-management-experimentation/feature-management/targeting/segments) guide for more information.

### Traffic type
Targeting decisions are made on a per-user or per unique key basis, but what are the available types of unique keys you intend to target? These are your traffic types, and you can define up to ten unique key types at the project level.
@@ -0,0 +1,69 @@
---
title: Explore How Feature Flags Affect User Targeting
sidebar_label: Explore How Feature Flags Affect User Targeting
description: Learn how to use this visualization tool to help you explore the effects of individual targeting, custom attributes, traffic allocation limits, and dynamic configurations on feature flags.
sidebar_position: 3
redirect_from:
- /docs/feature-management-experimentation/feature-management/best-practices/split-boxes-demo/
---

## Overview

The Split Boxes demo is a tool to help users understand how targeting rules interact and what impact various features have. It's a simple visualization that lets you see the effects of individual targeting, custom attributes, limit exposure, and dynamic configuration.

## Using the Boxes Demo

Each box represents a user ID.

![](.././static/split-boxes-demo.png)

* You can individually target using the cell location, such as b8 or j5.
* You can also create a segment that includes any of the available values.
* You can create targeting rules using the attributes **row**, **col**, or **account**:

* _row_ and _col_ use letters and numbers respectively, usually with "is in list" as the matcher.
* Valid account names include: Nike, Apple, LinkedIn, Best Buy, Google, Microsoft, Pinterest, Dell, Slack, Zoom, Samsung, and Disney.

* You can modify the configuration of the treatments by updating any of the values. The `font_size` expects standard HTML sizes such as medium, large, x-large, etc.
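
Assuming the JavaScript SDK used by the demo page, the attributes above would be passed to the client at evaluation time; the matcher sketch below is illustrative:

```javascript
// Attributes matching the demo's targeting rules (illustrative values).
const attributes = { row: '5', col: 'j', account: 'Nike' };

// With an initialized Split client, the evaluation would look like:
// const treatment = client.getTreatment('front_end_choose_boxes', attributes);

// A minimal sketch of the "is in list" matcher the row/col rules rely on:
function isInList(value, list) {
  return list.includes(value);
}
```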

## Setting up the Boxes Demo

There are three files attached:

* The HTML contains the SDK and can be run locally or on a server.
* You need to provide the browser API key for the Split environment where you will update the rollout plan.
* You also need to provide the feature flag name. These are entered as variables in the HTML:

```html
<script>
var splitAPIKey = "";
var splitName = "";
</script>
```

* The Boxes_split.txt file contains an example baseline definition of the feature flag.
* The feature flag can be created automatically using the `CreateBoxSplit.sh` script, which uses the Split Admin REST API and the `jq` tool. Run the script with this command line to create the feature flag and add definitions:

```shell
./CreateBoxSplit.sh [Project Name] [Environment Name] [Traffic Type] [Split Name] [Admin API Key]
```

Example:

```shell
./CreateBoxSplit.sh Default Production user front_end_choose_boxes 9enxxxxxxxxxxxxxxxxxxxxxx
```

In Chrome, to see feature flag changes immediately, disable cache in the Network tab of the Developer Tools.

![](.././static/split-boxes-chrome.png)

## Downloads

| File | Size | Notes |
| --------------------------------------------------------- | --------- | --------------------------------- |
| [CreateBoxSplit.sh.zip](.././static/create-box-split.sh.zip) | 1 KB | |
| [Boxes\_split.txt](.././static/boxes-split.txt) | 658 Bytes | *Right-click > Save Link As...* |
| [Boxes.htm](.././static/boxes.htm) | 8 KB | *Right-click > Save Link As...* |


@@ -2,7 +2,7 @@
title: Build a Resilient Integration
sidebar_label: Build a Resilient Integration
description: Learn how to build a resilient integration with Harness FME.
-sidebar_position: 7
+sidebar_position: 8
---

## Overview
@@ -70,7 +70,7 @@ For client-side SDKs, you can set up a listener for the `SDK_READY_TIMED_OUT` ev

### Account for the control treatment

-In case Split is unreachable and the SDK is unable to fetch feature flag definitions‚ and there is no cache available in the case of client side SDKs‚ any evaluations will return the [control treatment](/docs/feature-management-experimentation/feature-management/control-treatment).
+If Split is unreachable and the SDK is unable to fetch feature flag definitions, and there is no cache available in the case of client-side SDKs, any evaluations will return the [control treatment](/docs/feature-management-experimentation/feature-management/setup/control-treatment).

Make sure you code your application so that it is able to safely handle this situation. This may mean falling back to a safe behavior, such as turning an experimental feature off.
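
A minimal sketch of such a safe fallback, assuming the JavaScript SDK and an illustrative flag name:

```javascript
// Treat 'control' (or any unexpected treatment) the same as the safe
// 'off' path, so an unreachable Split never enables the feature.
function isFeatureEnabled(treatment) {
  return treatment === 'on';
}

// With an initialized Split client:
// const treatment = client.getTreatment('experimental_feature');
// if (isFeatureEnabled(treatment)) { /* new behavior */ } else { /* safe default */ }
```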

@@ -83,7 +83,7 @@ In the status page, you will see the status for Split's various components:
* **SDK API**: This API serves rollout plans for our SDKs. A service disruption could prevent new SDK instances from initializing if your rollout plans aren't yet cached in Split's CDN as a result of previous requests by other SDK instances, and running SDK instances would not be able to fetch rollout plan changes.
* **API**: Split's public API.
* **Web console**: A web console outage means users can't log into Split's web application to modify rollout plans. Existing rollout plans will continue to be served to Split SDKs in your applications, and your end users' experience will be unaffected.
-* **Data processing**: Issues in this component will impact Split's ability to ingest [impression](/docs/feature-management-experimentation/feature-management/impressions) and [event data](/docs/feature-management-experimentation/release-monitoring/events/). This will also affect Live tail, alerts, experimentation, and impression webhooks. SDKs mitigate issues in our data processing pipeline by following a retry mechanism when they fail to post data back to Split. Data processing issues are usually temporal, delaying ingestion and rarely resulting in data loss.
+* **Data processing**: Issues in this component will impact Split's ability to ingest [impression](/docs/feature-management-experimentation/feature-management/monitoring-analysis/impressions) and [event data](/docs/feature-management-experimentation/release-monitoring/events/). This will also affect Live tail, alerts, experimentation, and impression webhooks. SDKs mitigate issues in our data processing pipeline by retrying when they fail to post data back to Split. Data processing issues are usually transient, delaying ingestion and rarely resulting in data loss.
* **Integrations**: This component will reflect issues with any of Split's various [integrations](/docs/feature-management-experimentation/integrations). The integrations affected will be noted in the status page update and if the issue lies with an integration partner it will be noted and tracked.
* **CDN**: Issues with our CDN may prevent new SDK instances from fetching rollout plans during initialization. For server-side SDKs, this will result in treatment evaluations returning control treatments. For client-side SDKs, evaluations will return treatments according to the rollout plans already cached in the device, if a cache is available.
* **Streaming Authentication Service**: Issues in this component will prevent SDKs from receiving rollout plan updates via push notifications. In this scenario, all SDKs will fall back to polling to fetch updates with no impact to your end users.
@@ -21,7 +21,7 @@ Select the traffic type user to get started. Traffic types give you the ability

Owners, tags, and the description make it easy to sort, filter, and locate the features your team is rolling out. By default, administrators and the creator of the feature flag are considered its owners. Utilize groups with this owner field to organize your flags across your team. Learn more about owners and tags.

-For more information about creating a feature flag, see [Create a feature flag](/docs/feature-management-experimentation/feature-management/create-a-feature-flag/).
+For more information about creating a feature flag, see [Create a feature flag](/docs/feature-management-experimentation/feature-management/setup/create-a-feature-flag/).

## Add your feature flag to an environment

@@ -31,15 +31,15 @@ To configure your feature flag for a particular environment, select the environm

Treatments are the different variants or versions of your feature flag that you serve to your users. When you click the **Initiate environment** button, the Definition tab for a particular flag appears. Use this tab to assign your treatments.

-We default the treatment names to on and off for each new feature flag but you can edit these names and add additional treatments. The default treatment selected in the treatments section will be served to everyone if the feature flag is killed and to all traffic not exposed to a flag. For more information about the default treatment, see [Set the default treatment](/docs/feature-management-experimentation/feature-management/set-the-default-treatment).
+We default the treatment names to on and off for each new feature flag but you can edit these names and add additional treatments. The default treatment selected in the treatments section will be served to everyone if the feature flag is killed and to all traffic not exposed to a flag. For more information about the default treatment, see [Set the default treatment](/docs/feature-management-experimentation/feature-management/setup/default-treatment).

After your treatments are set up and the default treatment is chosen, you can set individual targets, limit exposure, and set targeting rules to explicitly assign treatments or set targets based on dependencies or demographic data as attributes.

* Set individual targets: Allows you to explicitly assign individual users or groups of users to one treatment.
* Limit exposure (advanced): Allows you to randomly assign a percentage of your users to be evaluated by all the targeting rules that are not individual targets. This feature is recommended for advanced experimentation use cases.
* Set targeting rules: Allows you to build out if/else statements to use demographic data as attributes to assign treatments to users and build dependencies with other features you manage in Split.

-Click **Save changes** to configure the rules for this feature flag in the environment you selected. Learn more about [targeting](/docs/feature-management-experimentation/feature-management/define-feature-flag-treatments-and-targeting).
+Click **Save changes** to configure the rules for this feature flag in the environment you selected. Learn more about [targeting](/docs/feature-management-experimentation/feature-management/setup/define-feature-flag-treatments-and-targeting).

Now that you've set up this feature flag's targeting rules, let's take a look at how this flag would work in your code. To implement this feature flag, copy and wrap the provided code snippet around your feature's treatments.

@@ -58,6 +58,12 @@ if (treatment === 'on') {
}
```

-To help verify that treatments are being served to your users, click the [Live tail tab](/docs/feature-management-experimentation/feature-management/live-tail/) to view a stream of impressions or SDK evaluations. Impressions occur whenever a visitor is assigned a treatment (i.e., variations) for a feature flag.
+To help verify that treatments are being served to your users, click the [Live tail tab](/docs/feature-management-experimentation/feature-management/monitoring-analysis/live-tail/) to view a stream of impressions or SDK evaluations. Impressions occur whenever a visitor is assigned a treatment (i.e., variations) for a feature flag.

-These impressions are generated by the SDKs each time `getTreatment` is called. They are periodically sent back to Split's servers where they are stored and can be accessed for later use. For more information about impressions, see [Impressions](/docs/feature-management-experimentation/feature-management/impressions).
+These impressions are generated by the SDKs each time `getTreatment` is called. They are periodically sent back to Split's servers where they are stored and can be accessed for later use. For more information about impressions, see [Impressions](/docs/feature-management-experimentation/feature-management/monitoring-analysis/impressions).
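
If you want to observe these impressions locally as they are generated, the JavaScript SDK accepts an `impressionListener` in its factory settings (a sketch; the key, SDK key placeholder, and handler body are illustrative):

```javascript
// Sketch of a factory configuration with an impression listener.
// The authorizationKey and key values are placeholders.
const config = {
  core: {
    authorizationKey: '<YOUR_SDK_KEY>',
    key: 'user_123',
  },
  impressionListener: {
    logImpression(impressionData) {
      // Called for each evaluation; log or forward the impression.
      console.log(impressionData.impression.feature,
                  impressionData.impression.treatment);
    },
  },
};

// const factory = SplitFactory(config);
```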

## Further Reading

Additional documentation, blog links, and articles:

- [Feature Flags For Dummies](https://www.harness.io/resources/feature-flags-for-dummies)
@@ -1,6 +1,6 @@
---
title: Create a Metric Alert Policy
-sidebar_position: 5
+sidebar_position: 6
---

import AlertPolicies from '/docs/feature-management-experimentation/shared/alert-policies/index.mdx'
@@ -1,6 +1,6 @@
---
title: Create a Metric
-sidebar_position: 4
+sidebar_position: 5
---

import MetricsSetup from '/docs/feature-management-experimentation/shared/metrics/setup/index.mdx'
@@ -2,7 +2,9 @@
title: Create an Experiment
sidebar_label: Create an Experiment
description: Learn how to create an experiment in Harness FME.
-sidebar_position: 6
+sidebar_position: 7
redirect_from:
- /docs/feature-management-experimentation/feature-management/faqs/is-there-a-way-to-limit-the-number-of-users-in-an-experiment
---

## Overview
@@ -33,6 +35,19 @@ To create an A/B test in Harness FME:
1. Apply tags to help categorize your experiment (for example, by team, status, or feature area).
1. Click **Save**.

### Limiting the number of users in an experiment

You can't directly cap the total number of users who will participate in an experiment. However, you can use the **Limit exposure** option to control the percentage of eligible users who are exposed to the experiment at any given time.

This approach lets you:

- Reduce risk when rolling out changes.
- Gather results from a smaller sample before expanding to all users.

:::tip
Start with a lower exposure percentage (for example, 10%) to validate results, then increase it gradually.
:::

## View experiment results

Once your experiment is running, Harness FME automatically tracks key metrics and monitors the statistical significance of the results.