
Commit 8f7e417

new doc: result-cache.md, refresh-cache-run-query.md and secondary-index.md doc update: delete.md, systemd.md, explain-analyze-query.md, full-text-search.md, environment-value.md file (#255)
* new doc: secondary-index.md update: delete.md, systemd.md, explain-analyze-query.md, full-text-search.md, environment-value.md file * new doc: refresh-cache-run-query.md
1 parent 4ae57b0 commit 8f7e417

File tree

17 files changed: +520 −46 lines changed

docs/api/stream/delete.md

Lines changed: 294 additions & 18 deletions
description: >-
  Delete OpenObserve streams via API. Deletion is async and handled by the
  compactor. Configure auto-deletion with data retention environment settings.
---
## Delete stream
OpenObserve provides multiple deletion strategies to manage your data lifecycle: immediate complete stream deletion, targeted time-range deletion with job tracking, and automatic retention-based cleanup.

## Overview
The Delete Stream API allows you to:

- Delete an entire stream and all its data
- Delete data within a specific time period with job tracking
- Monitor deletion job progress across clusters
- Manage cached query results

All deletion operations are asynchronous and processed by the Compactor service.

## Base URL
`https://example.remote.dev/`

Replace `example.remote.dev` with your actual OpenObserve instance URL.

## Content type
All requests and responses use JSON format.

```
Content-Type: application/json
```

## Endpoints

### Delete entire stream
Delete a complete stream and all associated data.

#### Request
**Method**: `DELETE` <br>
**Path**: `/api/{org_id}/streams/{stream_name}?type=logs&delete_all=true` <br>
**Parameters**:

| Name | Type | Location | Required | Description |
|------|------|----------|----------|-------------|
| org_id | string | path | Yes | Organization identifier |
| stream_name | string | path | Yes | Name of the stream to delete |
| type | string | query | Yes | Stream type: `logs`, `metrics`, or `traces` |
| delete_all | boolean | query | Yes | Delete all related resources, such as alerts and dashboards |

#### Request example
```bash
curl -X 'DELETE' \
  'https://example.remote.dev/api/default/streams/pii_test?type=logs&delete_all=true' \
  -H 'accept: application/json'
```
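The same request can be issued from a script. Below is a minimal Python sketch using only the standard library; the helper names are illustrative, not part of the OpenObserve API, and authentication headers are omitted for brevity:

```python
import json
import urllib.request
from urllib.parse import quote, urlencode

def build_stream_delete_url(base_url, org_id, stream_name,
                            stream_type="logs", delete_all=True):
    """Build the delete-stream endpoint URL (illustrative helper)."""
    query = urlencode({"type": stream_type,
                       "delete_all": str(delete_all).lower()})
    return (f"{base_url}/api/{quote(org_id)}/streams/"
            f"{quote(stream_name)}?{query}")

def delete_stream(base_url, org_id, stream_name):
    """Send the DELETE request and return the decoded JSON body."""
    req = urllib.request.Request(
        build_stream_delete_url(base_url, org_id, stream_name),
        method="DELETE",
        headers={"accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because deletion is asynchronous, a `200` response only confirms that the request was accepted, not that the data is gone.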
#### Response
**Status Code:** `200 OK`

```json
{
  "code": 200,
  "message": "stream deleted"
}
```

#### Response fields

| Field | Type | Description |
|-------|------|-------------|
| code | integer | HTTP status code |
| message | string | Confirmation message |

#### Status codes

| Code | Meaning |
|------|---------|
| 200 | Stream deleted successfully |
| 400 | Invalid parameters |
| 404 | Stream not found |
| 500 | Internal server error |
#### Behavior
Deletion is asynchronous and does not happen immediately:

1. When you call this API, the deletion request is marked in the system.
2. The API responds immediately; you do not wait for the actual deletion.
3. A background service called Compactor checks for pending deletions every 10 minutes.
4. When Compactor runs, it starts deleting your stream. This can take anywhere from seconds to several minutes, depending on how much data the stream contains.
5. In the worst case (if you request deletion just before Compactor runs), the entire process can take up to 30 minutes.
6. You do not need to wait. The deletion happens in the background, and you can check the stream status later to confirm that it has been deleted.

!!! note "Notes"

    - This operation cannot be undone.
    - Data is deleted from both the `file_list` table and the object store.
    - No job tracking is available for this endpoint.

!!! note "Environment variables"

    - You can change the Compactor run interval with `ZO_COMPACT_INTERVAL=600`. The unit is seconds; the default is 600 (10 minutes).
    - You can configure the data lifecycle to auto-delete old data with `ZO_COMPACT_DATA_RETENTION_DAYS=30`. The system then deletes data older than `30` days. The value must be greater than `0`.
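For container or systemd deployments, these settings are typically supplied as environment variables. A sample fragment, using the values discussed above:

```shell
# Compactor run interval in seconds (600 = 10 minutes, the default)
ZO_COMPACT_INTERVAL=600
# Auto-delete data older than N days; must be greater than 0
ZO_COMPACT_DATA_RETENTION_DAYS=30
```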

### Delete stream data by time range
Delete stream data within a specific time period with job tracking.

#### Request
**Method:** `DELETE`
<br>
**Path:** `/api/{org_id}/streams/{stream_name}/data_by_time_range?start=<start_ts>&end=<end_ts>`

#### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `org_id` | string | path | Organization identifier |
| `stream_name` | string | path | Name of the stream |
| `start` | long | query | Start timestamp in microseconds (UTC). Inclusive. |
| `end` | long | query | End timestamp in microseconds (UTC). Inclusive. |
#### Request example
```bash
curl -X DELETE \
  'https://example.remote.dev/api/default/streams/test_stream/data_by_time_range?start=1748736000000000&end=1751241600000000'
```
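The `start` and `end` values are microsecond epoch timestamps; the ones in the example above correspond to 2025-06-01 00:00:00 UTC and 2025-06-30 00:00:00 UTC. A small Python sketch for producing such values:

```python
from datetime import datetime, timezone

def to_micros(dt):
    """Convert a datetime to a microsecond epoch timestamp (UTC assumed if naive)."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1_000_000)

start = to_micros(datetime(2025, 6, 1))   # 1748736000000000
end = to_micros(datetime(2025, 6, 30))    # 1751241600000000
```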
#### Response
**Status Code:** `200 OK`

```json
{
  "id": "30ernyKEEMznL8KIXEaZhmDYRR9"
}
```
#### Response fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique job ID for tracking deletion progress |

#### Status codes

| Code | Meaning |
|------|---------|
| 200 | Deletion job created successfully |
| 400 | Invalid parameters (for example, an invalid timestamp format) |
| 404 | Stream not found |

#### Behavior
- Initiates a compaction delete job.
- Returns a job ID that can be used to track progress.
- Deletes data from:
    - `file_list` table
    - Object store (for example, S3)
- Granularity:
    - **Logs:** Data is deleted at hourly granularity.
    - **Traces:** Data is deleted at daily granularity.

---

### Get delete job status
Check the status of a time-range deletion job.

#### Request
**Method:** `GET`
<br>
**Path:** `/api/{org_id}/streams/{stream_name}/data_by_time_range/status/{id}`

#### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `org_id` | string | path | Organization identifier |
| `stream_name` | string | path | Name of the stream |
| `id` | string | path | Job ID returned from the deletion request |

#### Request example
```bash
curl -X GET \
  'https://example.remote.dev/api/default/streams/test_stream/data_by_time_range/status/30ernyKEEMznL8KIXEaZhmDYRR9'
```
#### Response: Completed
**Status Code:** `200 OK`

```json
{
  "id": "30f080gLbU4i21VpY2O3YzwrKDH",
  "status": "Completed",
  "metadata": [
    {
      "cluster": "dev3",
      "region": "us-test-3",
      "id": "30f080gLbU4i21VpY2O3YzwrKDH",
      "key": "default/logs/delete_d3/2025-07-27T04:00:00Z,2025-07-28T04:00:00Z",
      "created_at": 1754003156467113,
      "ended_at": 1754003356516415,
      "status": "Completed"
    },
    {
      "cluster": "dev4",
      "region": "us-test-4",
      "id": "30f080gLbU4i21VpY2O3YzwrKDH",
      "key": "default/logs/delete_d3/2025-07-27T04:00:00Z,2025-07-28T04:00:00Z",
      "created_at": 1754003156467113,
      "ended_at": 1754003326523177,
      "status": "Completed"
    }
  ]
}
```
#### Response: Pending
**Status Code:** `200 OK`

```json
{
  "id": "30f080gLbU4i21VpY2O3YzwrKDH",
  "status": "Pending",
  "metadata": [
    {
      "cluster": "dev3",
      "region": "us-test-3",
      "id": "30f080gLbU4i21VpY2O3YzwrKDH",
      "key": "default/logs/delete_d3/2025-07-27T04:00:00Z,2025-07-28T04:00:00Z",
      "created_at": 1754003156467113,
      "ended_at": 0,
      "status": "Pending"
    },
    {
      "cluster": "dev4",
      "region": "us-test-4",
      "id": "30f080gLbU4i21VpY2O3YzwrKDH",
      "key": "default/logs/delete_d3/2025-07-27T04:00:00Z,2025-07-28T04:00:00Z",
      "created_at": 1754003156467113,
      "ended_at": 0,
      "status": "Pending"
    }
  ]
}
```
#### Response: With errors
**Status Code:** `200 OK`

```json
{
  "id": "30fCWBSNWwTWnRJE0weFfDIc3zz",
  "status": "Pending",
  "metadata": [
    {
      "cluster": "dev4",
      "region": "us-test-4",
      "id": "30fCWBSNWwTWnRJE0weFfDIc3zz",
      "key": "default/logs/delete_d4/2025-07-21T14:00:00Z,2025-07-22T00:00:00Z",
      "created_at": 1754009269552227,
      "ended_at": 1754009558553845,
      "status": "Completed"
    }
  ],
  "errors": [
    {
      "cluster": "dev3",
      "error": "Error getting delete job status from cluster node: Status { code: Internal, message: \"Database error: DbError# SeaORMError# job not found\", metadata: MetadataMap { headers: {\"content-type\": \"application/grpc\", \"date\": \"Fri, 01 Aug 2025 00:58:01 GMT\", \"content-length\": \"0\"} }, source: None }",
      "region": "us-test-3"
    }
  ]
}
```
#### Response fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Job identifier |
| `status` | string | Overall job status: `Completed` or `Pending` |
| `metadata` | array | Array of per-cluster deletion details |
| `metadata[].cluster` | string | Cluster identifier |
| `metadata[].region` | string | Region/zone identifier |
| `metadata[].id` | string | Job ID |
| `metadata[].key` | string | Database key for the deletion operation |
| `metadata[].created_at` | long | Job creation timestamp in microseconds |
| `metadata[].ended_at` | long | Job completion timestamp in microseconds (0 if still pending) |
| `metadata[].status` | string | Individual cluster deletion status |
| `errors` | array | Array of errors from specific clusters (if any) |
| `errors[].cluster` | string | Cluster where the error occurred |
| `errors[].region` | string | Region identifier |
| `errors[].error` | string | Error message |

#### Status codes

| Code | Meaning |
|------|---------|
| 200 | Status retrieved successfully |
| 404 | Job ID not found |

#### Behavior
- Returns the current status of the deletion job.
- Shows progress across all clusters in a distributed setup.
- Shows error details if any cluster encountered failures.
- A status of `Pending` means deletion is still in progress.
- A status of `Completed` means all clusters finished deletion.
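Because the job runs per cluster, the top-level `status` stays `Pending` until every cluster reports `Completed`. A minimal client-side sketch for interpreting payloads shaped like the examples above (the helper name `job_finished` is illustrative, not part of the API):

```python
import json

def job_finished(payload):
    """Return True once the job and every per-cluster entry report Completed."""
    data = json.loads(payload) if isinstance(payload, str) else payload
    per_cluster = [m.get("status") == "Completed"
                   for m in data.get("metadata", [])]
    return data.get("status") == "Completed" and all(per_cluster)

# Trimmed-down payloads mirroring the documented response shape:
completed = {"id": "j1", "status": "Completed",
             "metadata": [{"cluster": "dev3", "status": "Completed"}]}
pending = {"id": "j1", "status": "Pending",
           "metadata": [{"cluster": "dev3", "status": "Pending"}]}
```

A polling loop would call the status endpoint until `job_finished` returns `True`, surfacing any `errors` entries along the way.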
---
### Delete cache results
Delete cached query results for a stream.

#### Request
**Method:** `DELETE`
<br>
**Path:** `/api/{org_id}/streams/{stream_name}/cache/status/results?type=<stream_type>&ts=<timestamp>`
#### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `org_id` | string | path | Organization identifier |
| `stream_name` | string | path | Stream name (use `_all` to delete cache for all streams) |
| `type` | string | query | Stream type: `logs`, `metrics`, or `traces` |
| `ts` | long | query | Timestamp threshold in microseconds. Deletes cache from the start up to this timestamp and retains cache from this timestamp onwards. |
#### Request example
```bash
curl -X DELETE \
  'https://example.remote.dev/api/default/streams/test_stream/_all/cache/results?type=logs&ts=1753849800000'
```
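The `ts` threshold is documented as microseconds since the epoch; everything cached before it is dropped. A small sketch for computing a cutoff such as "drop everything older than the last N hours" (the helper name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def cache_cutoff_micros(hours_to_keep, now=None):
    """Microsecond cutoff timestamp; cache older than this is deleted."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=hours_to_keep)
    return int(cutoff.timestamp() * 1_000_000)
```

The returned value can be passed directly as the `ts` query parameter.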
#### Response
**Status Code:** `200 OK`

```json
{
  "code": 200,
  "message": "cache deleted"
}
```
#### Response fields

| Field | Type | Description |
|-------|------|-------------|
| `code` | integer | HTTP status code |
| `message` | string | Confirmation message |

#### Status codes

| Code | Meaning |
|------|---------|
| 200 | Cache deleted successfully |
| 400 | Invalid parameters |
| 404 | Stream not found |

#### Behavior
- Accepts a `ts` (timestamp) query parameter in microseconds.
- Deletes cache from `cache_start` up to the given `ts`.
- Retains cache from `ts` onwards.
