mirror of https://github.com/rclone/rclone.git synced 2025-12-16 08:13:29 +00:00

Compare commits


5 Commits
v0.96 ... v0.97

Author           SHA1        Message                                                Date
Nick Craig-Wood  c6dfd5f2d3  Version 0.97                                           2014-05-05 22:25:29 +01:00
Nick Craig-Wood  99695d57ab  Implement single file operations for all file systems  2014-05-05 22:17:57 +01:00
Nick Craig-Wood  ca3752f824  s3: support sub-bucket paths                           2014-05-05 18:26:37 +01:00
Nick Craig-Wood  d0ca58bbb1  swift: Support sub container paths                     2014-05-05 18:26:37 +01:00
Nick Craig-Wood  580fa3a5a7  Documentation updates                                  2014-04-26 17:43:41 +01:00
19 changed files with 471 additions and 171 deletions

View File

@@ -7,7 +7,7 @@ Rclone
 [![Logo](http://rclone.org/img/rclone-120x120.png)](http://rclone.org/)
-Sync files and directories to and from
+Rclone is a command line program to sync files and directories to and from
 * Google Drive
 * Amazon S3
@@ -43,15 +43,13 @@ Or alternatively if you have Go installed use
 and this will build the binary in `$GOPATH/bin`.
-You can then modify the source and submit patches.
 Configure
 ---------
 First you'll need to configure rclone.  As the object storage systems
 have quite complicated authentication these are kept in a config file
 `.rclone.conf` in your home directory by default.  (You can use the
--config option to choose a different config file.)
+`--config` option to choose a different config file.)
 The easiest way to make the config is to run rclone with the config
 option, Eg
@@ -63,7 +61,7 @@ Usage
 Rclone syncs a directory tree from local to remote.
-Its basic syntax is like this
+Its basic syntax is
     Syntax: [options] subcommand <parameters> <parameters...>
@@ -84,7 +82,7 @@ Sync the source to the destination. Doesn't transfer
 unchanged files, testing first by modification time then by
 MD5SUM.  Deletes any files that exist in source that don't
 exist in destination.  Since this can cause data loss, test
-first with the -dry-run flag.
+first with the `--dry-run` flag.
     rclone ls [remote:path]
@@ -92,7 +90,7 @@ List all the objects in the the path.
     rclone lsd [remote:path]
-List all directoryes/objects/buckets in the the path.
+List all directories/objects/buckets in the the path.
     rclone mkdir remote:path
@@ -114,17 +112,23 @@ compares sizes and MD5SUMs and prints a report of files which
 don't match.  It doesn't alter the source or destination.
 General options:
-  * `-config` Location of the config file
-  * `-transfers=4`: Number of file transfers to run in parallel.
-  * `-checkers=8`: Number of MD5SUM checkers to run in parallel.
-  * `-dry-run=false`: Do a trial run with no permanent changes
-  * `-modify-window=1ns`: Max time difference to be considered the same - this is automatically set usually
-  * `-quiet=false`: Print as little stuff as possible
-  * `-stats=1m0s`: Interval to print stats
-  * `-verbose=false`: Print lots more stuff
+```
+--checkers=8: Number of checkers to run in parallel.
+--config="~/.rclone.conf": Config file.
+-n, --dry-run=false: Do a trial run with no permanent changes
+--modify-window=1ns: Max time diff to be considered the same
+-q, --quiet=false: Print as little stuff as possible
+--stats=1m0s: Interval to print stats
+--transfers=4: Number of file transfers to run in parallel.
+-v, --verbose=false: Print lots more stuff
+```
 Developer options:
-  * `-cpuprofile=""`: Write cpu profile to file
+```
+--cpuprofile="": Write cpu profile to file
+```
 Local Filesystem
 ----------------
@@ -133,13 +137,14 @@ Paths are specified as normal filesystem paths, so
     rclone sync /home/source /tmp/destination
-Will sync source to destination
+Will sync `/home/source` to `/tmp/destination`
 Swift / Rackspace cloudfiles / Memset Memstore
 ----------------------------------------------
 Paths are specified as remote:container (or remote: for the `lsd`
-command.)
+command.)  You may put subdirectories in too, eg
+`remote:container/path/to/dir`.
 So to copy a local directory to a swift container called backup:
@@ -155,7 +160,8 @@ os.Stat) for an object.
 Amazon S3
 ---------
-Paths are specified as remote:bucket
+Paths are specified as remote:bucket.  You may put subdirectories in
+too, eg `remote:bucket/path/to/dir`.
 So to copy a local directory to a s3 container called backup
@@ -170,7 +176,7 @@ Google drive
 Paths are specified as drive:path Drive paths may be as deep as required.
 The initial setup for drive involves getting a token from Google drive
-which you need to do in your browser.  The `rclone config` walks you
+which you need to do in your browser.  `rclone config` walks you
 through it.
 To copy a local directory to a drive directory called backup
@@ -179,6 +185,19 @@ To copy a local directory to a drive directory called backup
 Google drive stores modification times accurate to 1 ms.
+Single file copies
+------------------
+Rclone can copy single files
+
+    rclone src:path/to/file dst:path/dir
+
+Or
+
+    rclone src:path/to/file dst:path/to/file
+
+Note that you can't rename the file if you are copying from one file to another.
 License
 -------
@@ -188,7 +207,6 @@ COPYING file included in this package).
 Bugs
 ----
-  * Doesn't sync individual files yet, only directories.
   * Drive: Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
     * quota is 100.0 requests/second/user
   * Empty directories left behind with Local and Drive
@@ -197,6 +215,9 @@ Bugs
 Changelog
 ---------
+  * v0.97 - 2014-05-05
+    * Implement copying of single files
+    * s3 & swift: support paths inside containers/buckets
   * v0.96 - 2014-04-24
     * drive: Fix multiple files of same name being created
     * drive: Use o.Update and fs.Put to optimise transfers
@@ -228,7 +249,7 @@ The project website is at:
   * https://github.com/ncw/rclone
-There you can file bug reports, ask for help or contribute patches.
+There you can file bug reports, ask for help or send pull requests.
 Authors
 -------

View File

@@ -2,7 +2,7 @@
 title: "Rclone"
 description: "rclone syncs files to and from Google Drive, S3, Swift and Cloudfiles."
 type: page
-date: "2014-03-19"
+date: "2014-04-26"
 groups: ["about"]
 ---
@@ -11,7 +11,7 @@ Rclone
 [![Logo](/img/rclone-120x120.png)](http://rclone.org/)
-Sync files and directories to and from
+Rclone is a command line program to sync files and directories to and from
 * Google Drive
 * Amazon S3

View File

@@ -1,12 +1,12 @@
 ---
 title: "Contact"
 description: "Contact the rclone project"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Contact the rclone project
-  * [Github project page for source and reporting bugs](http://github.com/ncw/rclone)
+  * [Github project page for source, reporting bugs and pull requests](http://github.com/ncw/rclone)
   * <a href="https://plus.google.com/110609214444437761115" rel="publisher">Google+ page for general comments</a></li>
 Or email [Nick Craig-Wood](mailto:nick@craig-wood.com)

View File

@@ -1,7 +1,7 @@
 ---
 title: "Documentation"
 description: "Rclone Documentation"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Install
@@ -9,7 +9,7 @@ Install
 Rclone is a Go program and comes as a single binary file.
-[Download the relevant binary.](/downloads/)
+[Download](/downloads/) the relevant binary.
 Or alternatively if you have Go installed use
@@ -17,15 +17,13 @@ Or alternatively if you have Go installed use
 and this will build the binary in `$GOPATH/bin`.
-You can then modify the source and submit patches.
 Configure
 ---------
 First you'll need to configure rclone.  As the object storage systems
 have quite complicated authentication these are kept in a config file
 `.rclone.conf` in your home directory by default.  (You can use the
-`-config` option to choose a different config file.)
+`--config` option to choose a different config file.)
 The easiest way to make the config is to run rclone with the config
 option:

View File

@@ -2,34 +2,34 @@
 title: "Rclone downloads"
 description: "Download rclone binaries for your OS."
 type: page
-date: "2014-04-25"
+date: "2014-05-05"
 ---
-v0.96
+v0.97
 =====
 * Windows
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-windows-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-windows-amd64.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-windows-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-windows-amd64.zip)
 * OSX
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-osx-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-osx-amd64.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-osx-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-osx-amd64.zip)
 * Linux
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-linux-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-linux-amd64.zip)
-  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.96-linux-arm.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-linux-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-linux-amd64.zip)
+  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.97-linux-arm.zip)
 * FreeBSD
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-freebsd-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-freebsd-amd64.zip)
-  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.96-freebsd-arm.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-freebsd-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-freebsd-amd64.zip)
+  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.97-freebsd-arm.zip)
 * NetBSD
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-netbsd-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-netbsd-amd64.zip)
-  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.96-netbsd-arm.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-netbsd-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-netbsd-amd64.zip)
+  * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v0.97-netbsd-arm.zip)
 * OpenBSD
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-openbsd-386.zip)
-  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.96-openbsd-amd64.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-openbsd-386.zip)
+  * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v0.97-openbsd-amd64.zip)
 * Plan 9
-  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.96-plan9-386.zip)
+  * [386 - 32 Bit](http://downloads.rclone.org/rclone-v0.97-plan9-386.zip)
 Older downloads can be found [here](http://downloads.rclone.org/)

View File

@@ -1,7 +1,7 @@
 ---
 title: "Google drive"
 description: "Rclone docs for Google drive"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Paths are specified as `drive:path`

View File

@@ -1,7 +1,7 @@
 ---
 title: "Local Filesystem"
 description: "Rclone docs for the local filesystem"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Local Filesystem
@@ -19,7 +19,7 @@ but it is probably easier not to.
 Modified time
 -------------
-We read and write the modified time using an accuracy determined by
+Rclone reads and writes the modified time using an accuracy determined by
 the OS.  Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
 on OS X.

View File

@@ -1,7 +1,7 @@
 ---
 title: "Amazon S3"
 description: "Rclone docs for Amazon S3"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Paths are specified as `remote:bucket` or `remote:`

View File

@@ -1,7 +1,7 @@
 ---
 title: "Swift"
 description: "Swift"
-date: "2014-03-19"
+date: "2014-04-26"
 ---
 Swift refers to [Openstack Object Storage](http://www.openstack.org/software/openstack-storage/).

View File

@@ -140,7 +140,8 @@ type FsDrive struct {
 	client       *http.Client // authorized client
 	about        *drive.About // information about the drive, including the root
 	rootId       string       // Id of the root directory
-	foundRoot    sync.Once    // Whether we need to find the root directory or not
+	foundRoot    bool         // Whether we have found the root or not
+	findRootLock sync.Mutex   // Protect findRoot from concurrent use
 	dirCache     dirCache     // Map of directory path to directory id
 	findDirLock  sync.Mutex   // Protect findDir from concurrent use
 }
@@ -305,7 +306,10 @@ func NewFs(name, path string) (fs.Fs, error) {
 	if err != nil {
 		return nil, err
 	}
-	f := &FsDrive{root: root, dirCache: newDirCache()}
+	f := &FsDrive{
+		root:     root,
+		dirCache: newDirCache(),
+	}
 	// Try to pull the token from the cache; if this fails, we need to get one.
 	token, err := t.Config.TokenCache.Token()
@@ -331,14 +335,33 @@ func NewFs(name, path string) (fs.Fs, error) {
 	f.rootId = f.about.RootFolderId
 	// Put the root directory in
 	f.dirCache.Put("", f.rootId)
+	// Find the current root
+	err = f.findRoot(false)
+	if err != nil {
+		// Assume it is a file
+		newRoot, remote := splitPath(root)
+		newF := *f
+		newF.root = newRoot
+		// Make new Fs which is the parent
+		err = newF.findRoot(false)
+		if err != nil {
+			// No root so return old f
+			return f, nil
+		}
+		obj, err := newF.newFsObjectWithInfo(remote, nil)
+		if err != nil {
+			// File doesn't exist so return old f
+			return f, nil
+		}
+		// return a Fs Limited to this object
+		return fs.NewLimited(&newF, obj), nil
+	}
 	// fmt.Printf("Root id %s", f.rootId)
 	return f, nil
 }

 // Return an FsObject from a path
 //
-// May return nil if an error occurred
-func (f *FsDrive) NewFsObjectWithInfo(remote string, info *drive.File) fs.Object {
+func (f *FsDrive) newFsObjectWithInfo(remote string, info *drive.File) (fs.Object, error) {
 	fs := &FsObjectDrive{
 		drive:  f,
 		remote: remote,
@@ -349,9 +372,18 @@ func (f *FsDrive) NewFsObjectWithInfo(remote string, info *drive.File) fs.Object
 		err := fs.readMetaData() // reads info and meta, returning an error
 		if err != nil {
 			// logged already fs.Debug("Failed to read info: %s", err)
-			return nil
+			return nil, err
 		}
 	}
+	return fs, nil
+}
+
+// Return an FsObject from a path
+//
+// May return nil if an error occurred
+func (f *FsDrive) NewFsObjectWithInfo(remote string, info *drive.File) fs.Object {
+	fs, _ := f.newFsObjectWithInfo(remote, info)
+	// Errors have already been logged
 	return fs
 }
@@ -585,14 +617,21 @@ func (f *FsDrive) _findDir(path string, create bool) (pathId string, err error)
 //
 // If create is set it will make the directory if not found
 func (f *FsDrive) findRoot(create bool) error {
-	var err error
-	f.foundRoot.Do(func() {
-		f.rootId, err = f.findDir(f.root, create)
+	f.findRootLock.Lock()
+	defer f.findRootLock.Unlock()
+	if f.foundRoot {
+		return nil
+	}
+	rootId, err := f.findDir(f.root, create)
+	if err != nil {
+		return err
+	}
+	f.rootId = rootId
 	f.dirCache.Flush()
 	// Put the root directory in
 	f.dirCache.Put("", f.rootId)
-	})
-	return err
+	f.foundRoot = true
+	return nil
 }

 // Walk the path returning a channel of FsObjects
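
The foundRoot change above is worth dwelling on: `sync.Once.Do` runs its function at most once per process even when that function fails, so the old code could cache a failed root lookup forever. The mutex-plus-bool form only latches after success, which is what lets NewFs probe `findRoot(false)`, fail, and then retry against the parent directory for the single-file case. A minimal stand-alone sketch of the pattern (the `lazyRoot` type and all names are illustrative, not rclone code):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// lazyRoot caches a root id that may fail to resolve on the first try.
// Unlike sync.Once, a failed lookup leaves found == false, so a later
// call retries instead of silently caching the failure.
type lazyRoot struct {
	mu     sync.Mutex
	found  bool
	rootId string
}

func (l *lazyRoot) get(lookup func() (string, error)) (string, error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.found {
		return l.rootId, nil
	}
	id, err := lookup()
	if err != nil {
		return "", err // not marked found, so the next call retries
	}
	l.rootId = id
	l.found = true
	return id, nil
}

func main() {
	var r lazyRoot
	_, err := r.get(func() (string, error) { return "", errors.New("no such dir") })
	fmt.Println(err) // no such dir
	id, _ := r.get(func() (string, error) { return "root-id", nil })
	fmt.Println(id) // root-id - the earlier failure was not cached
}
```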

View File

@@ -18,9 +18,15 @@ var (
 // Filesystem info
 type FsInfo struct {
-	Name string // name of this fs
-	NewFs func(string, string) (Fs, error) // create a new file system
-	Config func(string) // function to call to help with config
+	// Name of this fs
+	Name string
+	// Create a new file system.  If root refers to an existing
+	// object, then it should return a Fs which only returns that
+	// object.
+	NewFs func(name string, root string) (Fs, error)
+	// Function to call to help with config
+	Config func(string)
+	// Options for the Fs configuration
 	Options []Option
 }
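
The expanded comment on NewFs is the contract the rest of this release leans on: a backend handed a path that names an object must hand back an Fs restricted to that object. For context, FsInfo entries like this are collected in a registry that backends fill in from init() and the core looks up by remote name. A stand-alone sketch of that registry pattern, with illustrative types rather than the real fs package:

```go
package main

import "fmt"

// Fs and FsInfo are cut-down stand-ins for the real interfaces.
type Fs interface{ String() string }

type FsInfo struct {
	Name  string
	NewFs func(name, root string) (Fs, error)
}

var fsRegistry []*FsInfo

// Register records a backend; backends call this from init().
func Register(info *FsInfo) { fsRegistry = append(fsRegistry, info) }

// Find looks a backend up by the name used in the remote spec.
func Find(name string) (*FsInfo, error) {
	for _, info := range fsRegistry {
		if info.Name == name {
			return info, nil
		}
	}
	return nil, fmt.Errorf("unknown fs %q", name)
}

type memFs struct{ root string }

func (f *memFs) String() string { return "memFs root " + f.root }

func main() {
	Register(&FsInfo{
		Name:  "mem",
		NewFs: func(name, root string) (Fs, error) { return &memFs{root: root}, nil },
	})
	info, err := Find("mem")
	if err != nil {
		panic(err)
	}
	f, _ := info.NewFs("mem", "some/dir")
	fmt.Println(f) // memFs root some/dir
}
```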

fs/limited.go (new file, 88 lines)
View File

@@ -0,0 +1,88 @@
+package fs
+
+import (
+	"fmt"
+	"io"
+	"time"
+)
+
+// This defines a Limited Fs which can only return the Objects passed in from the Fs passed in
+type Limited struct {
+	objects []Object
+	fs      Fs
+}
+
+// NewLimited maks a limited Fs limited to the objects passed in
+func NewLimited(fs Fs, objects ...Object) Fs {
+	f := &Limited{
+		objects: objects,
+		fs:      fs,
+	}
+	return f
+}
+
+// String returns a description of the FS
+func (f *Limited) String() string {
+	return fmt.Sprintf("%s limited to %d objects", f.fs.String(), len(f.objects))
+}
+
+// List the Fs into a channel
+func (f *Limited) List() ObjectsChan {
+	out := make(ObjectsChan, Config.Checkers)
+	go func() {
+		for _, obj := range f.objects {
+			out <- obj
+		}
+		close(out)
+	}()
+	return out
+}
+
+// List the Fs directories/buckets/containers into a channel
+func (f *Limited) ListDir() DirChan {
+	out := make(DirChan, Config.Checkers)
+	close(out)
+	return out
+}
+
+// Find the Object at remote.  Returns nil if can't be found
+func (f *Limited) NewFsObject(remote string) Object {
+	for _, obj := range f.objects {
+		if obj.Remote() == remote {
+			return obj
+		}
+	}
+	return nil
+}
+
+// Put in to the remote path with the modTime given of the given size
+//
+// May create the object even if it returns an error - if so
+// will return the object and the error, otherwise will return
+// nil and the error
+func (f *Limited) Put(in io.Reader, remote string, modTime time.Time, size int64) (Object, error) {
+	obj := f.NewFsObject(remote)
+	if obj == nil {
+		return nil, fmt.Errorf("Can't create %q in limited fs", remote)
+	}
+	return obj, obj.Update(in, modTime, size)
+}
+
+// Make the directory (container, bucket)
+func (f *Limited) Mkdir() error {
+	// All directories are already made - just ignore
+	return nil
+}
+
+// Remove the directory (container, bucket) if empty
+func (f *Limited) Rmdir() error {
+	return fmt.Errorf("Can't rmdir in limited fs")
+}
+
+// Precision of the ModTimes in this Fs
+func (f *Limited) Precision() time.Duration {
+	return f.fs.Precision()
+}
+
+// Check the interfaces are satisfied
+var _ Fs = &Limited{}
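
A toy that distills what Limited is for: wrap any Fs so that listing yields exactly the objects chosen at construction, which is how "a path naming one file" becomes "an Fs containing just that file". Everything below is a cut-down stand-in for the real fs interfaces (channels and the rest of the Fs surface omitted):

```go
package main

import "fmt"

// Object and Fs are minimal stand-ins for rclone's fs.Object / fs.Fs.
type Object interface{ Remote() string }

type Fs interface {
	List() []Object
	NewFsObject(remote string) Object
}

// limited wraps an Fs but only ever yields the objects it was given -
// the same idea as fs.NewLimited above.
type limited struct {
	fs      Fs
	objects []Object
}

func newLimited(fs Fs, objects ...Object) Fs {
	return &limited{fs: fs, objects: objects}
}

func (l *limited) List() []Object { return l.objects }

func (l *limited) NewFsObject(remote string) Object {
	for _, obj := range l.objects {
		if obj.Remote() == remote {
			return obj
		}
	}
	return nil
}

// obj is a trivial Object identified by its remote name.
type obj string

func (o obj) Remote() string { return string(o) }

// dirFs pretends to be a full directory listing.
type dirFs []Object

func (d dirFs) List() []Object { return d }

func (d dirFs) NewFsObject(remote string) Object {
	for _, o := range d {
		if o.Remote() == remote {
			return o
		}
	}
	return nil
}

func main() {
	full := dirFs{obj("a.txt"), obj("b.txt"), obj("c.txt")}
	one := newLimited(full, full.NewFsObject("b.txt"))
	for _, o := range one.List() {
		fmt.Println(o.Remote()) // prints just b.txt
	}
}
```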

View File

@@ -45,6 +45,16 @@ type FsObjectLocal struct {
 func NewFs(name, root string) (fs.Fs, error) {
 	root = path.Clean(root)
 	f := &FsLocal{root: root}
+	// Check to see if this points to a file
+	fi, err := os.Lstat(f.root)
+	if err == nil && fi.Mode().IsRegular() {
+		// It is a file, so use the parent as the root
+		remote := path.Base(root)
+		f.root = path.Dir(root)
+		obj := f.NewFsObject(remote)
+		// return a Fs Limited to this object
+		return fs.NewLimited(f, obj), nil
+	}
 	return f, nil
 }
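
The local backend's version of the single-file check is the simplest to see in isolation: Lstat the root, and if it is a regular file, rebase the Fs on the parent directory with the base name as the lone remote. A hedged, runnable sketch (splitFileRoot is an illustrative helper, not rclone API; like the diff it uses path rather than filepath, so Windows separators follow that choice):

```go
package main

import (
	"fmt"
	"os"
	"path"
)

// splitFileRoot mirrors the check NewFs now performs: if root points at
// a regular file, use its directory as the Fs root and its base name as
// the single remote object name.
func splitFileRoot(root string) (fsRoot, remote string, isFile bool) {
	root = path.Clean(root)
	fi, err := os.Lstat(root)
	if err == nil && fi.Mode().IsRegular() {
		return path.Dir(root), path.Base(root), true
	}
	return root, "", false
}

func main() {
	f, _ := os.CreateTemp("", "demo-*.txt")
	defer os.Remove(f.Name())

	fsRoot, remote, isFile := splitFileRoot(f.Name())
	fmt.Println(fsRoot, remote, isFile) // parent dir, file name, true

	fsRoot, _, isFile = splitFileRoot(os.TempDir())
	fmt.Println(fsRoot, isFile) // the directory itself, false
}
```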

View File

@@ -1,5 +1,4 @@
 Todo
-  * FIXME: ls without an argument for buckets/containers?
   * FIXME: More -dry-run checks for object transfer
   * Might be quicker to check md5sums first? for swift <-> swift certainly, and maybe for small files
   * swift: Ignoring the pseudo directories
@@ -12,7 +11,6 @@ Todo
     * make Account do progress meter
   * Make logging controllable with flags (mostly done)
   * -timeout: Make all timeouts be settable with command line parameters
-  * Check the locking in swift module!
   * Windows paths? Do we need to translate / and \?
   * Make a fs.Errorf and count errors and log them at a different level
   * Add max object size to fs metadata - 5GB for swift, infinite for local, ? for s3
@@ -22,7 +20,6 @@ Ideas
   * could do encryption - put IV into metadata?
   * optimise remote copy container to another container using remote
     copy if local is same as remote - use an optional Copier interface
-  * Allow subpaths container:/sub/path
   * support
     * sftp
     * scp
@@ -35,6 +32,8 @@ Need to make directory objects otherwise can't upload an empty directory
   * Or could upload empty directories only?
   * Can't purge a local filesystem because it leaves the directories behind
+Copying a single file?  Or maybe with a glob pattern?  Could do with LimitedFs
+
 s3
   * Can maybe set last modified?
     * https://forums.aws.amazon.com/message.jspa?messageID=214062
@@ -43,6 +42,7 @@ s3
 Bugs
   * Non verbose - not sure number transferred got counted up? CHECK
+  * When doing copy it recurses the whole of the destination FS which isn't necessary
 Making a release
   * go build ./...

View File

@@ -106,7 +106,7 @@ var Commands = []Command{
Name: "lsd", Name: "lsd",
ArgsHelp: "[remote://path]", ArgsHelp: "[remote://path]",
Help: ` Help: `
List all directoryes/objects/buckets in the the path.`, List all directories/containers/buckets in the the path.`,
Run: func(fdst, fsrc fs.Fs) { Run: func(fdst, fsrc fs.Fs) {
err := fs.ListDir(fdst) err := fs.ListDir(fdst)
if err != nil { if err != nil {

View File

@@ -1,3 +1,3 @@
 package main
-const Version = "v0.96"
+const Version = "v0.97"

s3/s3.go (114 lines changed)
View File

@@ -7,7 +7,6 @@ import (
 	"errors"
 	"fmt"
 	"io"
-	"log"
 	"mime"
 	"net/http"
 	"path"
@@ -111,6 +110,7 @@ type FsS3 struct {
 	b      *s3.Bucket // the connection to the bucket
 	bucket string     // the bucket we are working on
 	perm   s3.ACL     // permissions for new buckets / objects
+	root   string     // root of the bucket - ignore all objects above this
 }

 // FsObjectS3 describes a s3 object
@@ -131,7 +131,10 @@ type FsObjectS3 struct {
 // String converts this FsS3 to a string
 func (f *FsS3) String() string {
+	if f.root == "" {
 		return fmt.Sprintf("S3 bucket %s", f.bucket)
+	}
+	return fmt.Sprintf("S3 bucket %s path %s", f.bucket, f.root)
 }

 // Pattern to match a s3 path
@@ -185,14 +188,11 @@ func s3Connection(name string) (*s3.S3, error) {
 }

 // NewFsS3 contstructs an FsS3 from the path, bucket:path
-func NewFs(name, path string) (fs.Fs, error) {
-	bucket, directory, err := s3ParsePath(path)
+func NewFs(name, root string) (fs.Fs, error) {
+	bucket, directory, err := s3ParsePath(root)
 	if err != nil {
 		return nil, err
 	}
-	if directory != "" {
-		return nil, fmt.Errorf("Directories not supported yet in %q: %q", path, directory)
-	}
 	c, err := s3Connection(name)
 	if err != nil {
 		return nil, err
@@ -202,6 +202,24 @@ func NewFs(name, path string) (fs.Fs, error) {
 		bucket: bucket,
 		b:      c.Bucket(bucket),
 		perm:   s3.Private, // FIXME need user to specify
+		root:   directory,
+	}
+	if f.root != "" {
+		f.root += "/"
+		// Check to see if the object exists
+		_, err = f.b.Head(directory, nil)
+		if err == nil {
+			remote := path.Base(directory)
+			f.root = path.Dir(directory)
+			if f.root == "." {
+				f.root = ""
+			} else {
+				f.root += "/"
+			}
+			obj := f.NewFsObject(remote)
+			// return a Fs Limited to this object
+			return fs.NewLimited(f, obj), nil
+		}
 	}
 	return f, nil
 }
@@ -241,37 +259,76 @@ func (f *FsS3) NewFsObject(remote string) fs.Object {
 	return f.NewFsObjectWithInfo(remote, nil)
 }

-// Walk the path returning a channel of FsObjects
-func (f *FsS3) List() fs.ObjectsChan {
-	out := make(fs.ObjectsChan, fs.Config.Checkers)
-	go func() {
-		// FIXME need to implement ALL loop
-		objects, err := f.b.List("", "", "", 10000)
-		if err != nil {
-			fs.Stats.Error()
-			log.Printf("Couldn't read bucket %q: %s", f.bucket, err)
-		} else {
-			for i := range objects.Contents {
-				object := &objects.Contents[i]
-				if fs := f.NewFsObjectWithInfo(object.Key, object); fs != nil {
-					out <- fs
-				}
-			}
-		}
-		close(out)
-	}()
-	return out
-}
+// list the objects into the function supplied
+//
+// If directories is set it only sends directories
+func (f *FsS3) list(directories bool, fn func(string, *s3.Key)) {
+	delimiter := ""
+	if directories {
+		delimiter = "/"
+	}
+	// FIXME need to implement ALL loop
+	objects, err := f.b.List(f.root, delimiter, "", 10000)
+	if err != nil {
+		fs.Stats.Error()
+		fs.Log(f, "Couldn't read bucket %q: %s", f.bucket, err)
+	} else {
+		rootLength := len(f.root)
+		if directories {
+			for _, remote := range objects.CommonPrefixes {
+				if !strings.HasPrefix(remote, f.root) {
+					fs.Log(f, "Odd name received %q", remote)
+					continue
+				}
+				remote := remote[rootLength:]
+				fn(remote, &s3.Key{Key: remote})
+			}
+		} else {
+			for i := range objects.Contents {
+				object := &objects.Contents[i]
+				if !strings.HasPrefix(object.Key, f.root) {
+					fs.Log(f, "Odd name received %q", object.Key)
+					continue
+				}
+				remote := object.Key[rootLength:]
+				fn(remote, object)
+			}
+		}
+	}
+}
+
+// Walk the path returning a channel of FsObjects
+func (f *FsS3) List() fs.ObjectsChan {
+	out := make(fs.ObjectsChan, fs.Config.Checkers)
+	if f.bucket == "" {
+		// Return no objects at top level list
+		close(out)
+		fs.Stats.Error()
+		fs.Log(f, "Can't list objects at root - choose a bucket using lsd")
+	} else {
+		go func() {
+			defer close(out)
+			f.list(false, func(remote string, object *s3.Key) {
+				if fs := f.NewFsObjectWithInfo(remote, object); fs != nil {
+					out <- fs
+				}
+			})
+		}()
+	}
+	return out
+}

 // Lists the buckets
 func (f *FsS3) ListDir() fs.DirChan {
 	out := make(fs.DirChan, fs.Config.Checkers)
+	if f.bucket == "" {
+		// List the buckets
 		go func() {
 			defer close(out)
 			buckets, err := f.c.ListBuckets()
 			if err != nil {
 				fs.Stats.Error()
-				log.Printf("Couldn't list buckets: %s", err)
+				fs.Log(f, "Couldn't list buckets: %s", err)
 			} else {
 				for _, bucket := range buckets {
 					out <- &fs.Dir{
@@ -283,6 +340,19 @@ func (f *FsS3) ListDir() fs.DirChan {
 				}
 			}
 		}()
+	} else {
+		// List the directories in the path in the bucket
+		go func() {
+			defer close(out)
+			f.list(true, func(remote string, object *s3.Key) {
+				out <- &fs.Dir{
+					Name:  remote,
+					Bytes: object.Size,
+					Count: 0,
+				}
+			})
+		}()
+	}
 	return out
 }
@@ -354,7 +424,7 @@ func (o *FsObjectS3) readMetaData() (err error) {
 		return nil
 	}
-	headers, err := o.s3.b.Head(o.remote, nil)
+	headers, err := o.s3.b.Head(o.s3.root+o.remote, nil)
 	if err != nil {
 		fs.Debug(o, "Failed to read info: %s", err)
 		return err
@@ -407,7 +477,7 @@ func (o *FsObjectS3) SetModTime(modTime time.Time) {
 		return
 	}
 	o.meta[metaMtime] = swift.TimeToFloatString(modTime)
-	_, err = o.s3.b.Update(o.remote, o.s3.perm, o.meta)
+	_, err = o.s3.b.Update(o.s3.root+o.remote, o.s3.perm, o.meta)
 	if err != nil {
 		fs.Stats.Error()
 		fs.Log(o, "Failed to update remote mtime: %s", err)
@@ -421,7 +491,7 @@ func (o *FsObjectS3) Storable() bool {
 // Open an object for read
 func (o *FsObjectS3) Open() (in io.ReadCloser, err error) {
-	in, err = o.s3.b.GetReader(o.remote)
+	in, err = o.s3.b.GetReader(o.s3.root + o.remote)
 	return
 }
@@ -438,13 +508,13 @@ func (o *FsObjectS3) Update(in io.Reader, modTime time.Time, size int64) error {
 		contentType = "application/octet-stream"
 	}
-	_, err := o.s3.b.PutReaderHeaders(o.remote, in, size, contentType, o.s3.perm, headers)
+	_, err := o.s3.b.PutReaderHeaders(o.s3.root+o.remote, in, size, contentType, o.s3.perm, headers)
 	return err
 }

 // Remove an object
 func (o *FsObjectS3) Remove() error {
-	return o.s3.b.Del(o.remote)
+	return o.s3.b.Del(o.s3.root + o.remote)
 }

 // Check the interfaces are satisfied
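
The shape of the new list helper is easiest to see without a bucket: ask for keys under the root prefix, then strip the root before handing names on. Real S3 does the directory grouping server-side (a "/" delimiter makes it return CommonPrefixes); this sketch emulates that client-side over a plain slice, and listUnderRoot plus the sample keys are illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// listUnderRoot returns names under the given root prefix, with the
// prefix stripped. With directories set it reports only the first path
// segment of each key, deduplicated - a client-side stand-in for the
// CommonPrefixes an S3 delimiter listing would return.
func listUnderRoot(keys []string, root string, directories bool) []string {
	var out []string
	seen := map[string]bool{}
	for _, key := range keys {
		if !strings.HasPrefix(key, root) {
			continue // corresponds to the "Odd name received" guard
		}
		rest := key[len(root):]
		if directories {
			if i := strings.Index(rest, "/"); i >= 0 {
				dir := rest[:i]
				if !seen[dir] {
					seen[dir] = true
					out = append(out, dir)
				}
			}
			continue
		}
		out = append(out, rest)
	}
	return out
}

func main() {
	keys := []string{"backup/a.txt", "backup/sub/b.txt", "other/c.txt"}
	fmt.Println(listUnderRoot(keys, "backup/", false)) // [a.txt sub/b.txt]
	fmt.Println(listUnderRoot(keys, "backup/", true))  // [sub]
}
```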

View File

@@ -1,13 +1,11 @@
 // Swift interface
 package swift

-// FIXME need to prevent anything but ListDir working for swift://
-
 import (
 	"errors"
 	"fmt"
 	"io"
-	"log"
+	"path"
 	"regexp"
 	"strings"
 	"time"
@@ -73,7 +71,10 @@ type FsObjectSwift struct {
 // String converts this FsSwift to a string
 func (f *FsSwift) String() string {
+	if f.root == "" {
 		return fmt.Sprintf("Swift container %s", f.container)
+	}
+	return fmt.Sprintf("Swift container %s path %s", f.container, f.root)
 }

 // Pattern to match a swift path
@@ -118,19 +119,37 @@ func swiftConnection(name string) (*swift.Connection, error) {
 }

 // NewFs contstructs an FsSwift from the path, container:path
-func NewFs(name, path string) (fs.Fs, error) {
-	container, directory, err := parsePath(path)
+func NewFs(name, root string) (fs.Fs, error) {
+	container, directory, err := parsePath(root)
 	if err != nil {
 		return nil, err
 	}
-	if directory != "" {
-		return nil, fmt.Errorf("Directories not supported yet in %q", path)
-	}
 	c, err := swiftConnection(name)
 	if err != nil {
 		return nil, err
 	}
-	f := &FsSwift{c: *c, container: container, root: directory}
+	f := &FsSwift{
+		c:         *c,
+		container: container,
+		root:      directory,
+	}
+	if f.root != "" {
+		f.root += "/"
+		// Check to see if the object exists
+		_, _, err = f.c.Object(container, directory)
+		if err == nil {
+			remote := path.Base(directory)
+			f.root = path.Dir(directory)
+			if f.root == "." {
+				f.root = ""
+			} else {
+				f.root += "/"
+			}
+			obj := f.NewFsObject(remote)
+			// return a Fs Limited to this object
+			return fs.NewLimited(f, obj), nil
+		}
+	}
 	return f, nil
 }
@@ -162,41 +181,77 @@ func (f *FsSwift) NewFsObject(remote string) fs.Object {
 	return f.NewFsObjectWithInfo(remote, nil)
 }

-// Walk the path returning a channel of FsObjects
-func (f *FsSwift) List() fs.ObjectsChan {
-	out := make(fs.ObjectsChan, fs.Config.Checkers)
-	go func() {
-		// FIXME use a smaller limit?
-		err := f.c.ObjectsWalk(f.container, nil, func(opts *swift.ObjectsOpts) (interface{}, error) {
-			objects, err := f.c.Objects(f.container, opts)
-			if err == nil {
-				for i := range objects {
-					object := &objects[i]
-					if fs := f.NewFsObjectWithInfo(object.Name, object); fs != nil {
-						out <- fs
-					}
-				}
-			}
-			return objects, err
-		})
-		if err != nil {
-			fs.Stats.Error()
-			log.Printf("Couldn't read container %q: %s", f.container, err)
-		}
-		close(out)
-	}()
-	return out
-}
+// list the objects into the function supplied
+//
+// If directories is set it only sends directories
+func (f *FsSwift) list(directories bool, fn func(string, *swift.Object)) {
+	// Options for ObjectsWalk
+	opts := swift.ObjectsOpts{
+		Prefix: f.root,
+		Limit:  256,
+	}
+	if directories {
+		opts.Delimiter = '/'
+	}
+	rootLength := len(f.root)
+	err := f.c.ObjectsWalk(f.container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) {
+		objects, err := f.c.Objects(f.container, opts)
+		if err == nil {
+			for i := range objects {
+				object := &objects[i]
+				// FIXME if there are no directories, swift gives back the files for some reason!
+				if directories && !strings.HasSuffix(object.Name, "/") {
+					continue
+				}
+				if !strings.HasPrefix(object.Name, f.root) {
+					fs.Log(f, "Odd name received %q", object.Name)
+					continue
+				}
+				remote := object.Name[rootLength:]
+				fn(remote, object)
+			}
+		}
+		return objects, err
+	})
+	if err != nil {
+		fs.Stats.Error()
+		fs.Log(f, "Couldn't read container %q: %s", f.container, err)
+	}
+}
+
+// Walk the path returning a channel of FsObjects
+func (f *FsSwift) List() fs.ObjectsChan {
+	out := make(fs.ObjectsChan, fs.Config.Checkers)
+	if f.container == "" {
+		// Return no objects at top level list
+		close(out)
+		fs.Stats.Error()
+		fs.Log(f, "Can't list objects at root - choose a container using lsd")
+	} else {
+		// List the objects
+		go func() {
+			defer close(out)
+			f.list(false, func(remote string, object *swift.Object) {
+				if fs := f.NewFsObjectWithInfo(remote, object); fs != nil {
+					out <- fs
+				}
+			})
+		}()
+	}
+	return out
+}

 // Lists the containers
 func (f *FsSwift) ListDir() fs.DirChan {
 	out := make(fs.DirChan, fs.Config.Checkers)
+	if f.container == "" {
+		// List the containers
 		go func() {
 			defer close(out)
 			containers, err := f.c.ContainersAll(nil)
 			if err != nil {
 				fs.Stats.Error()
-				log.Printf("Couldn't list containers: %s", err)
+				fs.Log(f, "Couldn't list containers: %v", err)
 			} else {
 				for _, container := range containers {
 					out <- &fs.Dir{
@@ -207,6 +262,19 @@ func (f *FsSwift) ListDir() fs.DirChan {
 				}
 			}
 		}()
+	} else {
+		// List the directories in the path in the container
+		go func() {
+			defer close(out)
+			f.list(true, func(remote string, object *swift.Object) {
+				out <- &fs.Dir{
+					Name:  remote,
+					Bytes: object.Bytes,
+					Count: 0,
+				}
+			})
+		}()
+	}
 	return out
 }
@@ -275,7 +343,7 @@ func (o *FsObjectSwift) readMetaData() (err error) {
 	if o.meta != nil {
 		return nil
 	}
-	info, h, err := o.swift.c.Object(o.swift.container, o.remote)
+	info, h, err := o.swift.c.Object(o.swift.container, o.swift.root+o.remote)
 	if err != nil {
 		fs.Debug(o, "Failed to read info: %s", err)
 		return err
@@ -314,7 +382,7 @@ func (o *FsObjectSwift) SetModTime(modTime time.Time) {
 		return
 	}
 	o.meta.SetModTime(modTime)
-	err = o.swift.c.ObjectUpdate(o.swift.container, o.remote, o.meta.ObjectHeaders())
+	err = o.swift.c.ObjectUpdate(o.swift.container, o.swift.root+o.remote, o.meta.ObjectHeaders())
 	if err != nil {
 		fs.Stats.Error()
 		fs.Log(o, "Failed to update remote mtime: %s", err)
@@ -328,7 +396,7 @@ func (o *FsObjectSwift) Storable() bool {
 // Open an object for read
 func (o *FsObjectSwift) Open() (in io.ReadCloser, err error) {
-	in, _, err = o.swift.c.ObjectOpen(o.swift.container, o.remote, true, nil)
+	in, _, err = o.swift.c.ObjectOpen(o.swift.container, o.swift.root+o.remote, true, nil)
 	return
 }
@@ -339,13 +407,13 @@ func (o *FsObjectSwift) Update(in io.Reader, modTime time.Time, size int64) erro
 	// Set the mtime
 	m := swift.Metadata{}
 	m.SetModTime(modTime)
-	_, err := o.swift.c.ObjectPut(o.swift.container, o.remote, in, true, "", "", m.ObjectHeaders())
+	_, err := o.swift.c.ObjectPut(o.swift.container, o.swift.root+o.remote, in, true, "", "", m.ObjectHeaders())
 	return err
 }

 // Remove an object
 func (o *FsObjectSwift) Remove() error {
-	return o.swift.c.ObjectDelete(o.swift.container, o.remote)
+	return o.swift.c.ObjectDelete(o.swift.container, o.swift.root+o.remote)
 }

 // Check the interfaces are satisfied
