From 92e508ac3d6f9d9e60ef7b060e771d630b56ea78 Mon Sep 17 00:00:00 2001
From: Chris Lu
Date: Fri, 13 Nov 2020 13:46:10 -0800
Subject: [PATCH] Updated Filer Change Data Capture (markdown)

---
 Filer-Change-Data-Capture.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Filer-Change-Data-Capture.md b/Filer-Change-Data-Capture.md
index d9d9f5a..a72e5b1 100644
--- a/Filer-Change-Data-Capture.md
+++ b/Filer-Change-Data-Capture.md
@@ -38,3 +38,4 @@ This is basically stream processing or event processing for files. The possible
 * A **job queue**: upload files to a folder, process new files as soon as possible, and delete the processed files.
 * Do-it-yourself **Data Replication or Backup**.
 * **Batch processing**: streaming data is cool, but sometimes batching is more efficient. To combine streaming and batching, you can write one batch of new data to a file and trigger the batch processing on that file.
+* **Folder size statistics**: aggregate file sizes per folder from the change events (see the sketch below).
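
To make the new use case concrete, here is a minimal Go sketch of folder size statistics driven by the filer's metadata change stream. It assumes the `SubscribeMetadata` gRPC call and message fields from `filer_pb` (`weed/pb/filer.proto`), the default filer gRPC port `18888` (HTTP port + 10000); the `/buckets` prefix and the client name are placeholders, and field names such as `Attributes.FileSize` may differ between SeaweedFS versions, so treat this as a sketch rather than a drop-in tool.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"time"

	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
	"google.golang.org/grpc"
)

func main() {
	// Dial the filer's gRPC port (assumed default: HTTP port 8888 + 10000).
	conn, err := grpc.Dial("localhost:18888", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := filer_pb.NewSeaweedFilerClient(conn)

	// Subscribe to metadata changes under one folder, starting from now.
	stream, err := client.SubscribeMetadata(context.Background(), &filer_pb.SubscribeMetadataRequest{
		ClientName: "folder-size-stats", // placeholder client name
		PathPrefix: "/buckets",          // placeholder folder to watch
		SinceNs:    time.Now().UnixNano(),
	})
	if err != nil {
		log.Fatal(err)
	}

	sizes := make(map[string]int64) // folder path -> total bytes of direct children

	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		event := resp.EventNotification

		// Deletion, or the "old" half of an update/rename: subtract the old size.
		if oldEntry := event.OldEntry; oldEntry != nil && !oldEntry.IsDirectory {
			sizes[resp.Directory] -= int64(oldEntry.Attributes.FileSize)
		}
		// Creation, or the "new" half of an update/rename: add the new size.
		if newEntry := event.NewEntry; newEntry != nil && !newEntry.IsDirectory {
			dir := resp.Directory
			// A cross-folder rename carries the destination folder separately.
			if event.NewParentPath != "" {
				dir = event.NewParentPath
			}
			sizes[dir] += int64(newEntry.Attributes.FileSize)
		}
		fmt.Printf("folder sizes: %v\n", sizes)
	}
}
```

Creates arrive with only `NewEntry` set, deletes with only `OldEntry`, and updates and renames with both, so the subtract/add pair above keeps each folder's running total consistent across all event types.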