From 5b07cc48962b32a1d640357f2dc827180ccb4d92 Mon Sep 17 00:00:00 2001
From: Chris Lu
Date: Thu, 16 Jul 2020 16:29:38 -0700
Subject: [PATCH] Updated Hadoop Compatible File System (markdown)

---
 Hadoop-Compatible-File-System.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Hadoop-Compatible-File-System.md b/Hadoop-Compatible-File-System.md
index 348b9d3..3e76363 100644
--- a/Hadoop-Compatible-File-System.md
+++ b/Hadoop-Compatible-File-System.md
@@ -72,6 +72,7 @@ Both reads and writes are working fine.
 * Configure Hadoop to use SeaweedFS in `etc/hadoop/conf/core-site.xml`. `core-site.xml` resides on each node in the Hadoop cluster. You must add the same properties to each instance of `core-site.xml`. There are several properties to modify:
   1. `fs.seaweedfs.impl`: This property defines the Seaweed HCFS implementation classes that are contained in the SeaweedFS HDFS client JAR. It is required.
   1. `fs.defaultFS`: This property defines the default file system URI to use. It is optional if a path always has prefix `seaweedfs://localhost:8888`.
+  1. `fs.AbstractFileSystem.seaweedfs.impl`: This property registers the SeaweedFS implementation of Hadoop's AbstractFileSystem, which delegates to the existing SeaweedFS FileSystem. It is only necessary for use with Hadoop 3.x.
 
 ```
 <configuration>
@@ -83,6 +84,10 @@ Both reads and writes are working fine.
     <name>fs.defaultFS</name>
     <value>seaweedfs://localhost:8888</value>
   </property>
+  <property>
+    <name>fs.AbstractFileSystem.seaweedfs.impl</name>
+    <value>seaweed.hdfs.SeaweedAbstractFileSystem</value>
+  </property>
 </configuration>
 ```
 
 * Deploy the SeaweedFS HDFS client jar
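
For context, a minimal complete `core-site.xml` combining the three properties described above might look like the sketch below. The `seaweed.hdfs.SeaweedFileSystem` value for `fs.seaweedfs.impl` is an assumption (the patch shows only the surrounding hunks, not that property's value), while `seaweedfs://localhost:8888` and `seaweed.hdfs.SeaweedAbstractFileSystem` come from the patch itself; adjust the filer host and port for your deployment.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Required: the SeaweedFS HCFS implementation class from the
       SeaweedFS HDFS client JAR. Class name assumed, not confirmed
       by this patch. -->
  <property>
    <name>fs.seaweedfs.impl</name>
    <value>seaweed.hdfs.SeaweedFileSystem</value>
  </property>
  <!-- Optional: default file system URI; unneeded if paths always
       carry the seaweedfs://host:port prefix explicitly. -->
  <property>
    <name>fs.defaultFS</name>
    <value>seaweedfs://localhost:8888</value>
  </property>
  <!-- Hadoop 3.x only: AbstractFileSystem binding that delegates
       to the SeaweedFS FileSystem implementation above. -->
  <property>
    <name>fs.AbstractFileSystem.seaweedfs.impl</name>
    <value>seaweed.hdfs.SeaweedAbstractFileSystem</value>
  </property>
</configuration>
```

Remember that, as the patched text notes, this same file must be present on every node in the Hadoop cluster.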