- How to access the server dashboard?
- Does it support xxx language?
- Does it support FUSE?
- Does it support large files, e.g., 500M ~ 10G?
- How to configure volumes larger than 30GB?
- Why do my 010 replicated volume files have different sizes?
- Why does the weed volume server lose connection with the master?
- How to store large logs?
- Mount Filer
How to access the server dashboard?
SeaweedFS has web dashboards for its different services:
- Master server dashboards can be accessed at http://hostname:port in a web browser. For example: http://localhost:9333
- Volume server dashboards can be accessed at http://hostname:port/ui/index.html. For example: http://localhost:8080/ui/index.html
Also see #275.
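To quickly verify from the command line that a dashboard is reachable, a curl request works (a sketch, assuming the default ports 9333 and 8080 on localhost):

```
# check that the master and volume server dashboards respond
curl -I http://localhost:9333
curl -I http://localhost:8080/ui/index.html
```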
Does it support xxx language?
If using weed filer, just send one HTTP POST to write, or one HTTP GET to read.
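For example, with curl against a filer (a minimal sketch, assuming the filer listens on the default localhost:8888; the path /documents/report.pdf is just an illustration):

```
# write a file through the filer (the directory is created on the fly)
curl -F file=@report.pdf "http://localhost:8888/documents/report.pdf"

# read it back
curl "http://localhost:8888/documents/report.pdf" -o report_copy.pdf
```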
If using SeaweedFS for block storage, you may try to reuse some existing libraries.
The internal management APIs are in gRPC. You can generate the language bindings for your own purpose.
Does it support FUSE?
Yes.
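For example, a filer path can be mounted as a local directory via FUSE (a sketch, assuming a filer running on the default localhost:8888 and an existing empty mount point):

```
# mount SeaweedFS onto a local directory through FUSE
weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs
```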
Does it support large files, e.g., 500M ~ 10G?
Large files will be automatically split into chunks by weed filer, weed mount, etc.
How to configure volumes larger than 30GB?
Before 1.29, the maximum volume size was limited to 30GB. However, with recent larger disks, one 8TB hard drive can hold 200+ volumes. The large number of volumes introduces unnecessary workload for the master.
Since 1.29, there are separate builds with _large_disk in the file names:
- darwin_amd64_large_disk.tar.gz
- linux_amd64_large_disk.tar.gz
- windows_amd64_large_disk.zip
These builds are not compatible with the normal 30GB versions. The large disk version uses 17 bytes for each file entry, while previously each file entry needed 16 bytes.
To upgrade to the large disk version (a command sketch follows this list):
- remove the *.idx files
- use the large-disk version and run weed fix to re-generate the *.idx files
- start the master with a larger volume size limit
- start the volume servers with a reasonable maximum number of volumes
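A minimal sketch of these steps, assuming the volume data lives in /data, a single volume with id 1, one volume server, and an 80GB volume size limit; adjust paths, volume ids, ports and limits to your deployment:

```
rm /data/*.idx                          # remove the old 16-byte-entry index files
weed fix -dir=/data -volumeId=1         # re-generate the .idx file (repeat for each volume id)
weed master -volumeSizeLimitMB=80000    # start the master with a larger volume size limit
weed volume -dir=/data -max=100 -mserver=localhost:9333   # cap the number of volumes per server
```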
Why do my 010 replicated volume files have different sizes?
The volumes are consistent, but not necessarily the same size or the same number of files. This could be due to these reasons:
- If some files are written only to some but not all of the replicas, the writes are considered failed (a best-effort attempt will be made to delete the files that were written).
- The compaction may not happen at exactly the same time.
Why does the weed volume server lose connection with the master?
You can increase the "-pulseSeconds" on the master from the default 5 seconds to a higher number. See #100: https://github.com/chrislusf/seaweedfs/issues/100
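For example (a sketch, keeping all other flags at their defaults):

```
# allow volume servers more time between heartbeats before they are considered down
weed master -pulseSeconds=15
```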
How to store large logs?
Log files are usually very large, but SeaweedFS is mostly for small to medium-sized files. How can you store them? "weed filer" can help.
Usually the logs are collected over a long time span. Let's say each day's log is a manageable 128MB. You can store each day's log via "weed filer" under the "/logs/" folder. For example:
/logs/2015-01-01.log
/logs/2015-01-02.log
/logs/2015-01-03.log
/logs/2015-01-04.log
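Uploading and reading back a day's log then looks like this (a sketch, assuming a filer on the default localhost:8888 and a local log at /var/log/app/2015-01-01.log):

```
# store one day's log under /logs/
curl -F file=@/var/log/app/2015-01-01.log "http://localhost:8888/logs/2015-01-01.log"

# read it back later
curl "http://localhost:8888/logs/2015-01-01.log" -o 2015-01-01.log
```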
Mount Filer
weed mount error after restarting
If you mount the SeaweedFS filer on macOS, sometimes when restarting "weed mount -dir xxx", you may see this error:
mount helper error: mount_osxfuse: mount point xxx is itself on a OSXFUSE volume
To fix this, run mount to list the current mounts:
chris:tmp chris$ mount
/dev/disk1s1 on / (apfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
map -fstab on /Network/Servers (autofs, automounted, nobrowse)
/dev/disk2 on /Volumes/FUSE for macOS (hfs, local, nodev, nosuid, read-only, noowners, quarantine, mounted by chris)
weed@osxfuse0 on /Users/chris/tmp/mm (osxfuse, local, nodev, nosuid, synchronous, mounted by chris)
The last line shows the folder that still has something mounted on it. It needs to be unmounted first:
chris:tmp chris$ umount weed@osxfuse0
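After unmounting, restarting the mount should work again (a sketch, assuming the same mount point as above and whatever other flags your original "weed mount" command used):

```
chris:tmp chris$ weed mount -dir /Users/chris/tmp/mm
```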
That should be it!