adjust outside cluster mode, HCFS 1.5.0

Chris Lu 2020-10-11 21:35:30 -07:00
parent dfe3f19dbb
commit 126c84cc15
4 changed files with 16 additions and 16 deletions

@@ -29,13 +29,13 @@ To unmount, just shut down the "weed mount".
#### Mount outside of a SeaweedFS cluster
Besides connecting to the filer server, `weed mount` also connects directly to volume servers for better performance. However, if the SeaweedFS cluster is started by Kubernetes or docker-compose, the volume servers only know their own IP addresses inside the cluster, which are not accessible by `weed mount`.
In addition to connecting to the filer server, `weed mount` also connects directly to volume servers for better performance.
However, if the SeaweedFS cluster is started by Kubernetes or docker-compose and
the volume servers only know their own IP addresses inside the cluster,
`weed mount` is not able to access the volume servers from outside the cluster.
The `weed mount -outsideContainerClusterMode` option can help here. It assumes:
* All volume server containers are accessible through the same hostname or IP address as the filer.
* All volume server container ports are open externally to the cluster.
So `weed mount -outsideContainerClusterMode -filer=<filerHostname:filerPort>` will replace the volume servers' hostnames with the filer server's hostname, while keeping the volume servers' port numbers unchanged.
The `weed mount -outsideContainerClusterMode` option can help here. It assumes all volume server containers are accessible
through the `publicUrl` addresses set when they are started with `weed volume -publicUrl=xxx`.
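For example, a minimal sketch of the two sides; the hostnames, ports, and mount directory here are placeholders, not values from this commit:

```
# inside the cluster: start each volume server with an externally reachable address
weed volume -mserver=master:9333 -port=8080 -publicUrl=seaweed.example.com:8080

# outside the cluster: mount through the filer; volume traffic then goes to the publicUrl
weed mount -outsideContainerClusterMode -filer=seaweed.example.com:8888 -dir=/mnt/seaweedfs
```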
### Weed Mount Architecture

@@ -26,7 +26,7 @@ Then get the SeaweedFS Hadoop client jar.
```
cd share/hadoop/common/lib/
wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.4.9/seaweedfs-hadoop2-client-1.4.9.jar
wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.5.0/seaweedfs-hadoop2-client-1.5.0.jar
```
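With the jar on the classpath and `fs.seaweedfs.impl` registered as `seaweed.hdfs.SeaweedFileSystem` in core-site.xml, access can be sanity-checked from the shell. This is a sketch; `localhost:8888` below is the default filer address and may differ in your setup:

```
# list the root of the SeaweedFS filer through the HCFS client
hadoop fs -ls seaweedfs://localhost:8888/
```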
# TestDFSIO Benchmark

@@ -23,7 +23,7 @@ Maven
<dependency>
  <groupId>com.github.chrislusf</groupId>
  <artifactId>seaweedfs-hadoop3-client</artifactId>
  <version>1.4.9</version>
  <version>1.5.0</version>
</dependency>
or
@@ -31,16 +31,16 @@ or
<dependency>
  <groupId>com.github.chrislusf</groupId>
  <artifactId>seaweedfs-hadoop2-client</artifactId>
  <version>1.4.9</version>
  <version>1.5.0</version>
</dependency>
```
Or you can download the latest version from Maven Central:
* https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop2-client
* [seaweedfs-hadoop2-client-1.4.9.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.4.9/seaweedfs-hadoop2-client-1.4.9.jar)
* [seaweedfs-hadoop2-client-1.5.0.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.5.0/seaweedfs-hadoop2-client-1.5.0.jar)
* https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop3-client
* [seaweedfs-hadoop3-client-1.4.9.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/1.4.9/seaweedfs-hadoop3-client-1.4.9.jar)
* [seaweedfs-hadoop3-client-1.5.0.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/1.5.0/seaweedfs-hadoop3-client-1.5.0.jar)
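Alternatively, Maven itself can fetch the jar into the local repository; a sketch assuming a working `mvn` installation:

```
# resolve the hadoop2 client jar from Maven Central into ~/.m2
mvn dependency:get -Dartifact=com.github.chrislusf:seaweedfs-hadoop2-client:1.5.0
```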
# Test SeaweedFS on Hadoop

@@ -11,12 +11,12 @@ To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/conf/spark-env.sh
## Installation not inheriting from Hadoop cluster configuration
Copy the seaweedfs-hadoop2-client-1.4.9.jar to all executor machines.
Copy the seaweedfs-hadoop2-client-1.5.0.jar to all executor machines.
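For example, with `scp`; the host names and Spark jars directory below are illustrative:

```
# push the client jar to every executor machine (hosts and paths are placeholders)
for host in spark-worker-1 spark-worker-2; do
  scp seaweedfs-hadoop2-client-1.5.0.jar "$host":/opt/spark/jars/
done
```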
Add the following to spark/conf/spark-defaults.conf on every node running Spark:
```
spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.4.9.jar
spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.4.9.jar
spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.5.0.jar
spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.5.0.jar
```
And modify the configuration at runtime:
@@ -37,8 +37,8 @@ And modify the configuration at runtime:
1. change the spark-defaults.conf
```
spark.driver.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.4.9.jar
spark.executor.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.4.9.jar
spark.driver.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.5.0.jar
spark.executor.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.5.0.jar
spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem
```
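With both the classpath and `spark.hadoop.fs.seaweedfs.impl` set, a job can be submitted against SeaweedFS paths directly. The following is a sketch; the filer address, application jar, and class are placeholders:

```
# spark.hadoop.* entries are passed through to the underlying Hadoop configuration
spark-submit \
  --conf spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888 \
  --class com.example.MyJob \
  /path/to/my-job.jar \
  seaweedfs://localhost:8888/input seaweedfs://localhost:8888/output
```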