Updated weed shell (markdown)

Konstantin Lebedev 2023-02-13 14:24:43 +05:00
parent 88f8cec797
commit 58b04cb505

@@ -3,63 +3,74 @@
```
$ weed shell
> help
Type: "help <command>" for help on <command>. Most commands support "<command> -h" also for options.
	cluster.check                 # check current cluster network connectivity
	cluster.ps                    # check current cluster process status
	cluster.raft.add              # add a server to the raft cluster
	cluster.raft.ps               # check current raft cluster status
	cluster.raft.remove           # remove a server from the raft cluster
	collection.delete             # delete specified collection
	collection.list               # list all collections
	ec.balance                    # balance all ec shards among all racks and volume servers
	ec.decode                     # decode a erasure coded volume into a normal volume
	ec.encode                     # apply erasure coding to a volume
	ec.rebuild                    # find and rebuild missing ec shards among volume servers
	fs.cat                        # stream the file content on to the screen
	fs.cd                         # change directory to a directory /path/to/dir
	fs.configure                  # configure and apply storage options for each location
	fs.du                         # show disk usage
	fs.ls                         # list all files under a directory
	fs.meta.cat                   # print out the meta data content for a file or directory
	fs.meta.changeVolumeId        # change volume id in existing metadata.
	fs.meta.load                  # load saved filer meta data to restore the directory and file structure
	fs.meta.notify                # recursively send directory and file meta data to notification message queue
	fs.meta.save                  # save all directory and file meta data to a local file for metadata backup.
	fs.mkdir                      # create a directory
	fs.mv                         # move or rename a file or a folder
	fs.pwd                        # print out current directory
	fs.rm                         # remove file and directory entries
	fs.tree                       # recursively list all files under a directory
	fs.verify                     # recursively verify all files under a directory
	lock                          # lock in order to exclusively manage the cluster
	mount.configure               # configure the mount on current server
	mq.topic.list                 # print out all topics
	remote.cache                  # cache the file content for mounted directories or files
	remote.configure              # remote storage configuration
	remote.meta.sync              # synchronize the local file meta data with the remote file metadata
	remote.mount                  # mount remote storage and pull its metadata
	remote.mount.buckets          # mount all buckets in remote storage and pull its metadata
	remote.uncache                # keep the metadata but remote cache the file content for mounted directories or files
	remote.unmount                # unmount remote storage
	s3.bucket.create              # create a bucket with a given name
	s3.bucket.delete              # delete a bucket by a given name
	s3.bucket.list                # list all buckets
	s3.bucket.quota               # set/remove/enable/disable quota for a bucket
	s3.bucket.quota.enforce       # check quota for all buckets, make the bucket read only if over the limit
	s3.circuitBreaker             # configure and apply s3 circuit breaker options for each bucket
	s3.clean.uploads              # clean up stale multipart uploads
	s3.configure                  # configure and apply s3 options for each bucket
	unlock                        # unlock the cluster-wide lock
	volume.balance                # balance all volumes among volume servers
	volume.check.disk             # check all replicated volumes to find and fix inconsistencies. It is optional and resource intensive.
	volume.configure.replication  # change volume replication value
	volume.copy                   # copy a volume from one volume server to another volume server
	volume.delete                 # delete a live volume from one volume server
	volume.deleteEmpty            # delete empty volumes from all volume servers
	volume.fix.replication        # add or remove replicas to volumes that are missing replicas or over-replicated
	volume.fsck                   # check all volumes to find entries not used by the filer
	volume.list                   # list all volumes
	volume.mark                   # Mark volume writable or readonly from one volume server
	volume.mount                  # mount a volume from one volume server
	volume.move                   # move a live volume from one volume server to another volume server
	volume.tier.download          # download the dat file of a volume from a remote tier
	volume.tier.move              # change a volume from one disk type to another
	volume.tier.upload            # upload the dat file of a volume to a remote tier
	volume.unmount                # unmount a volume from one volume server
	volume.vacuum                 # compact volumes if deleted entries are more than the limit
	volume.vacuum.disable         # disable vacuuming request from Master, however volume.vacuum still works.
	volume.vacuum.enable          # enable vacuuming request from Master
	volumeServer.evacuate         # move out all data on a volume server
	volumeServer.leave            # stop a volume server from sending heartbeats to the master
```
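
As a quick orientation, a minimal interactive session might look like the sketch below. It uses only commands from the listing above: `lock`/`unlock` around any cluster-changing work, read-only inspection otherwise, and `-h` to see a command's options (per the help text, most commands support it). Actual output depends on your cluster, so it is omitted here.

```
$ weed shell
> cluster.ps          # inspect which masters, volume servers, and filers are running
> lock                # take the cluster-wide lock before making changes
> volume.list         # review the current volume layout
> volume.balance -h   # show the options before actually balancing
> unlock              # release the lock when done
```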
For example: