Updated Hadoop Compatible File System (markdown)

Chris Lu 2020-07-16 16:29:38 -07:00
parent cba3089788
commit 5b07cc4896

@@ -72,6 +72,7 @@ Both reads and writes are working fine.
* Configure Hadoop to use SeaweedFS in `etc/hadoop/conf/core-site.xml`. `core-site.xml` resides on each node in the Hadoop cluster. You must add the same properties to each instance of `core-site.xml`. There are several properties to modify:
1. `fs.seaweedfs.impl`: This property defines the Seaweed HCFS implementation classes that are contained in the SeaweedFS HDFS client JAR. It is required.
1. `fs.defaultFS`: This property defines the default file system URI to use. It is optional if paths are always given with the full `seaweedfs://localhost:8888` prefix.
1. `fs.AbstractFileSystem.seaweedfs.impl`: This property registers the SeaweedFS implementation of Hadoop's AbstractFileSystem, which delegates to the existing SeaweedFS FileSystem. It is only necessary for Hadoop 3.x (see the verification sketch at the end of this section).
```
<configuration>
@@ -83,6 +84,10 @@ Both reads and writes are working fine.
<name>fs.defaultFS</name>
<value>seaweedfs://localhost:8888</value>
</property>
<property>
<name>fs.AbstractFileSystem.seaweedfs.impl</name>
<value>seaweed.hdfs.SeaweedAbstractFileSystem</value>
</property>
</configuration>
```
* Deploy the SeaweedFS HDFS client jar
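
A minimal sketch of how the configuration above could be verified once the SeaweedFS HDFS client jar is on the Hadoop classpath. It uses only Hadoop's standard `FileSystem` API; the filer address `seaweedfs://localhost:8888` comes from the example `core-site.xml`, while the class name and the `/tmp/...` path below are illustrative assumptions, not part of the original page.
```java
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeaweedFsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Picks up fs.seaweedfs.impl, fs.AbstractFileSystem.seaweedfs.impl,
        // and fs.defaultFS from the core-site.xml on the classpath.
        Configuration conf = new Configuration();

        // Resolves to the SeaweedFS filer configured above.
        FileSystem fs = FileSystem.get(URI.create("seaweedfs://localhost:8888"), conf);

        // Write a small file, then list its parent directory (path is hypothetical).
        Path path = new Path("/tmp/seaweedfs-smoke-test.txt");
        try (OutputStream out = fs.create(path, true)) {
            out.write("hello from the Hadoop FileSystem API".getBytes("UTF-8"));
        }
        for (FileStatus status : fs.listStatus(path.getParent())) {
            System.out.println(status.getPath() + " " + status.getLen());
        }
        fs.close();
    }
}
```
If both the write and the directory listing succeed, the SeaweedFS scheme is wired up correctly for this client.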