diff --git a/FUSE-Mount.md b/FUSE-Mount.md
index 53a0eb5..4d3d0f2 100644
--- a/FUSE-Mount.md
+++ b/FUSE-Mount.md
@@ -29,13 +29,13 @@
 To unmount, just shut it down the "weed mount".

 #### Mount outside of a SeaweedFS cluster

-Besides connecting to filer server, `weed mount` also directly connects to volume servers directly for better performance. However, if the SeaweedFS cluster is started by Kubernetes or docker-compose, the volume servers only knows its own IP addresses inside the cluster, which are not accessible by `weed mount`.
+In addition to connecting to the filer server, `weed mount` also connects directly to volume servers for better performance.
+However, if the SeaweedFS cluster is started by Kubernetes or docker-compose and
+the volume servers only know their own IP addresses inside the cluster,
+`weed mount` is not able to access the volume servers from outside the cluster.

-`weed mount -outsideContainerClusterMode` option can help here. It assumes:
- * All volume server containers are accessible through the same hostname or IP address as the filer.
- * All volume server container ports are open external to the cluster.
-
-So the `weed mount -outsideContainerClusterMode -filer=` will use the filer server's hostname to replace volume servers' hostname, but keeping the volume servers' port number unchanged.
+The `weed mount -outsideContainerClusterMode` option can help here. It assumes all volume server containers are accessible
+through the `publicUrl` address set when starting them with `weed volume -publicUrl=xxx`.

 ### Weed Mount Architecture
diff --git a/Hadoop-Benchmark.md b/Hadoop-Benchmark.md
index 795c1f3..85989bd 100644
--- a/Hadoop-Benchmark.md
+++ b/Hadoop-Benchmark.md
@@ -26,7 +26,7 @@ Then get the seaweedfs hadoop client jar.

 ```
 cd share/hadoop/common/lib/
-wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.4.9/seaweedfs-hadoop2-client-1.4.9.jar
+wget https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.5.0/seaweedfs-hadoop2-client-1.5.0.jar
 ```

 # TestDFSIO Benchmark
diff --git a/Hadoop-Compatible-File-System.md b/Hadoop-Compatible-File-System.md
index e1f7582..fe5d4d3 100644
--- a/Hadoop-Compatible-File-System.md
+++ b/Hadoop-Compatible-File-System.md
@@ -23,7 +23,7 @@ Maven
 <dependency>
   <groupId>com.github.chrislusf</groupId>
   <artifactId>seaweedfs-hadoop3-client</artifactId>
-  <version>1.4.9</version>
+  <version>1.5.0</version>
 </dependency>

 or
@@ -31,16 +31,16 @@ or
 <dependency>
   <groupId>com.github.chrislusf</groupId>
   <artifactId>seaweedfs-hadoop2-client</artifactId>
-  <version>1.4.9</version>
+  <version>1.5.0</version>
 </dependency>
 ```

 Or you can download the latest version from MavenCentral
 * https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop2-client
- * [seaweedfs-hadoop2-client-1.4.9.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.4.9/seaweedfs-hadoop2-client-1.4.9.jar)
+ * [seaweedfs-hadoop2-client-1.5.0.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop2-client/1.5.0/seaweedfs-hadoop2-client-1.5.0.jar)
 * https://mvnrepository.com/artifact/com.github.chrislusf/seaweedfs-hadoop3-client
- * [seaweedfs-hadoop3-client-1.4.9.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/1.4.9/seaweedfs-hadoop3-client-1.4.9.jar)
+ * [seaweedfs-hadoop3-client-1.5.0.jar](https://oss.sonatype.org/service/local/repositories/releases/content/com/github/chrislusf/seaweedfs-hadoop3-client/1.5.0/seaweedfs-hadoop3-client-1.5.0.jar)

 # Test SeaweedFS on Hadoop
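To verify that the upgraded 1.5.0 client jar is picked up, a one-off listing against the filer works as a smoke test. This is a minimal sketch, not part of the change above: the `localhost:8888` filer address is an assumed default, and `seaweed.hdfs.SeaweedFileSystem` is the implementation class referenced in the Spark configuration below.

```
# Hedged smoke test: list the filer root through the SeaweedFS Hadoop client.
# localhost:8888 is an assumed default filer address; adjust to your deployment.
hadoop fs \
  -D fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
  -ls seaweedfs://localhost:8888/
```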
diff --git a/run-Spark-on-SeaweedFS.md b/run-Spark-on-SeaweedFS.md
index 6f8081c..89de187 100644
--- a/run-Spark-on-SeaweedFS.md
+++ b/run-Spark-on-SeaweedFS.md
@@ -11,12 +11,12 @@ To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/conf/sp

 ## installation not inheriting from Hadoop cluster configuration

-Copy the seaweedfs-hadoop2-client-1.4.9.jar to all executor machines.
+Copy the seaweedfs-hadoop2-client-1.5.0.jar to all executor machines.

 Add the following to spark/conf/spark-defaults.conf on every node running Spark
 ```
-spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.4.9.jar
-spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.4.9.jar
+spark.driver.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.5.0.jar
+spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.5.0.jar
 ```

 And modify the configuration at runtime:
@@ -37,8 +37,8 @@ And modify the configuration at runtime:
 1. change the spark-defaults.conf
 ```
-spark.driver.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.4.9.jar
-spark.executor.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.4.9.jar
+spark.driver.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.5.0.jar
+spark.executor.extraClassPath=/Users/chris/go/src/github.com/chrislusf/seaweedfs/other/java/hdfs2/target/seaweedfs-hadoop2-client-1.5.0.jar
 spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem
 ```
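The same settings can also be supplied per job at submit time rather than through spark-defaults.conf, in the spirit of the "modify the configuration at runtime" note above. A minimal sketch, assuming the placeholder jar path from the change and Spark's bundled SparkPi example as a stand-in workload:

```
# Sketch: pass the SeaweedFS client settings per job via spark-submit.
# --driver-class-path is used because spark.driver.extraClassPath set via
# --conf does not take effect once the driver JVM has already started.
# The jar path is a placeholder; SparkPi is only a stand-in workload.
spark-submit \
  --driver-class-path /path/to/seaweedfs-hadoop2-client-1.5.0.jar \
  --conf spark.executor.extraClassPath=/path/to/seaweedfs-hadoop2-client-1.5.0.jar \
  --conf spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 100
```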