esync

Directory watching and remote syncing

commit d3d400cb0ec1310a02eefee9adbba80855bd0cc8
parent 70b23193e8ca4284dfcac3b9a8e007620121bcbe
Author: Erik Loualiche <eloualiche@users.noreply.github.com>
Date:   Sun,  1 Mar 2026 11:47:23 -0600

Rewrite esync in Go with Bubbletea TUI (#3)

* Add design document for Go rewrite

Covers architecture, TUI design, CLI commands, configuration schema,
and new features for rewriting esync from Python to Go with Cobra,
Bubbletea, Viper, and fsnotify.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add implementation plan for Go rewrite

14 tasks covering scaffolding, config, syncer, watcher, logger, TUI,
CLI commands (sync, init, check, edit, status), signal handling,
README, integration tests, and final polish. Includes dependency
graph and parallelization notes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: scaffold Go project with Cobra root command

Remove Python source files (esync/, pyproject.toml, uv.lock, .python-version)
and replace with Go project structure including Cobra CLI framework, placeholder
internal packages (config, syncer, watcher, tui, logger), and a working build.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config package with TOML loading, defaults, and search path

Implements the core configuration package that other packages depend on:
- Config structs with mapstructure tags for TOML deserialization
- Load() with Viper for TOML parsing, defaults, and validation
- FindConfigFile()/FindConfigIn() for config file discovery
- IsRemote() to detect remote sync targets (SCP-style or SSH config)
- AllIgnorePatterns() combining settings and rsync ignore lists
- DefaultTOML() template for init command
- Comprehensive test suite (14 test cases)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
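
The IsRemote() detection mentioned above can be sketched roughly along the lines of rsync's own rule: a colon appearing before any slash marks the target as remote. This is an illustrative guess at the logic, not the actual implementation in internal/config:

```go
package main

import (
	"fmt"
	"strings"
)

// isRemote is a hypothetical stand-in for config.IsRemote(). It mirrors
// rsync's convention that a colon occurring before any slash marks the
// destination as remote ("user@host:/path" or "host:path").
func isRemote(target string) bool {
	i := strings.Index(target, ":")
	return i > 0 && !strings.Contains(target[:i], "/")
}

func main() {
	fmt.Println(isRemote("user@host:/srv/data")) // true
	fmt.Println(isRemote("./local/dir"))         // false
}
```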

* chore: run go mod tidy to record viper as a direct dependency

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add syncer package with rsync command builder and SSH support

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add watcher package with fsnotify and debouncing

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add logger package with JSON and text output

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add TUI with dashboard, log view, and Lipgloss styles

Implement the terminal UI layer using Bubbletea and Lipgloss:
- styles.go: color palette (title, synced, syncing, error, dim, help)
- dashboard.go: main view with sync events, stats, filtering, pause toggle
- logview.go: scrollable log viewer with level coloring and filtering
- app.go: root model switching between dashboard and log views via channels

Removes the placeholder tui.go.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add sync command with TUI and daemon modes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add smart init command with .gitignore import

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add check and edit commands for config validation and preview

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add status command and PID file for daemon mode

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add graceful shutdown with signal handling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: rewrite README for Go version with TOML examples

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: add integration tests for local sync and watcher

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add example config and update gitignore

Add a fully annotated esync.toml.example covering every config field.
Add the compiled esync binary to .gitignore.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
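
Based on the literal strings visible in the init command's diff (the remote placeholder and the default ignore list), a minimal esync.toml might look like the fragment below. The section and key names are an assumption inferred from the Config.Sync/Config.Settings structs, whose mapstructure tags are not fully shown:

```toml
[sync]
local  = "."
remote = "user@host:/path/to/dest"

[settings]
ignore = [".git", "node_modules", ".DS_Store"]
```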

* ci: add CI and release workflows

CI runs on pull requests to main: vet, test, build.
Release builds cross-compiled binaries (darwin/arm64, linux/amd64)
and creates a GitHub release when a version tag is pushed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Diffstat:
A.github/workflows/ci.yml | 24++++++++++++++++++++++++
A.github/workflows/release.yml | 56++++++++++++++++++++++++++++++++++++++++++++++++++++++++
M.gitignore | 3+++
D.python-version | 1-
Acmd/check.go | 219+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acmd/edit.go | 101+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acmd/init.go | 237+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acmd/root.go | 27+++++++++++++++++++++++++++
Acmd/status.go | 66++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acmd/sync.go | 336+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adocs/plans/2026-03-01-go-rewrite-design.md | 262+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adocs/plans/2026-03-01-go-rewrite-plan.md | 2924+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aesync.toml.example | 115+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Desync/__init__.py | 20--------------------
Desync/cli.py | 378-------------------------------------------------------------------------------
Desync/config.py | 186-------------------------------------------------------------------------------
Desync/sync_manager.py | 476-------------------------------------------------------------------------------
Desync/watchdog_watcher.py | 52----------------------------------------------------
Desync/watcher_base.py | 18------------------
Desync/watchman_watcher.py | 99-------------------------------------------------------------------------------
Ago.mod | 41+++++++++++++++++++++++++++++++++++++++++
Ago.sum | 93+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aintegration_test.go | 150+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/config/config.go | 212+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/config/config_test.go | 370+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/logger/logger.go | 128+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/logger/logger_test.go | 200+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/syncer/syncer.go | 249+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/syncer/syncer_test.go | 340+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/tui/app.go | 161+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/tui/dashboard.go | 272+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/tui/logview.go | 202+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/tui/styles.go | 18++++++++++++++++++
Ainternal/watcher/watcher.go | 222+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ainternal/watcher/watcher_test.go | 112+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Amain.go | 7+++++++
Dpyproject.toml | 20--------------------
Mreadme.md | 512+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
Duv.lock | 452-------------------------------------------------------------------------------
39 files changed, 7638 insertions(+), 1723 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml @@ -0,0 +1,24 @@ +name: CI + +on: + pull_request: + branches: [main] + +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - uses: actions/setup-go@v5 + with: + go-version-file: go.mod + + - name: Vet + run: go vet ./... + + - name: Test + run: go test ./... -v + + - name: Build + run: go build ./... diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml @@ -0,0 +1,56 @@ +name: Release + +on: + push: + tags: + - "v*" + +permissions: + contents: write + +jobs: + build: + runs-on: ubuntu-latest + strategy: + matrix: + include: + - goos: darwin + goarch: arm64 + suffix: darwin-arm64 + - goos: linux + goarch: amd64 + suffix: linux-amd64 + steps: + - uses: actions/checkout@v4 + + - uses: actions/setup-go@v5 + with: + go-version-file: go.mod + + - name: Build + env: + GOOS: ${{ matrix.goos }} + GOARCH: ${{ matrix.goarch }} + CGO_ENABLED: "0" + run: go build -ldflags="-s -w" -o esync-${{ matrix.suffix }} . 
+ + - name: Upload artifact + uses: actions/upload-artifact@v4 + with: + name: esync-${{ matrix.suffix }} + path: esync-${{ matrix.suffix }} + + release: + needs: build + runs-on: ubuntu-latest + steps: + - name: Download all artifacts + uses: actions/download-artifact@v4 + + - name: Create release + uses: softprops/action-gh-release@v2 + with: + generate_release_notes: true + files: | + esync-darwin-arm64/esync-darwin-arm64 + esync-linux-amd64/esync-linux-amd64 diff --git a/.gitignore b/.gitignore @@ -14,6 +14,9 @@ wheels/ # Virtual environments .venv +# Compiled binary +esync + # temporary ignores test-sync tests diff --git a/.python-version b/.python-version @@ -1 +0,0 @@ -3.9 diff --git a/cmd/check.go b/cmd/check.go @@ -0,0 +1,219 @@ +package cmd + +import ( + "fmt" + "os" + "path/filepath" + "strings" + + "github.com/charmbracelet/lipgloss" + "github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" +) + +// --------------------------------------------------------------------------- +// Styles +// --------------------------------------------------------------------------- + +var ( + greenHeader = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("10")) + yellowHeader = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("11")) + dimText = lipgloss.NewStyle().Foreground(lipgloss.Color("8")) +) + +// --------------------------------------------------------------------------- +// Command +// --------------------------------------------------------------------------- + +var checkCmd = &cobra.Command{ + Use: "check", + Short: "Validate config and preview included/excluded files", + Long: "Load the esync configuration, walk the local directory, and show which files would be included or excluded by the ignore patterns.", + RunE: runCheck, +} + +func init() { + rootCmd.AddCommand(checkCmd) +} + +// --------------------------------------------------------------------------- +// Run +// 
--------------------------------------------------------------------------- + +func runCheck(cmd *cobra.Command, args []string) error { + cfg, err := loadConfig() + if err != nil { + return err + } + return printPreview(cfg) +} + +// --------------------------------------------------------------------------- +// Shared: loadConfig +// --------------------------------------------------------------------------- + +// loadConfig loads configuration from the -c flag or auto-detects it. +func loadConfig() (*config.Config, error) { + path := cfgFile + if path == "" { + path = config.FindConfigFile() + } + if path == "" { + return nil, fmt.Errorf("no config file found; use -c to specify one, or run `esync init`") + } + cfg, err := config.Load(path) + if err != nil { + return nil, fmt.Errorf("loading config %s: %w", path, err) + } + return cfg, nil +} + +// --------------------------------------------------------------------------- +// Shared: printPreview +// --------------------------------------------------------------------------- + +// fileEntry records a file path and (for excluded files) the rule that matched. +type fileEntry struct { + path string + rule string +} + +// printPreview walks the local directory and displays included/excluded files. +func printPreview(cfg *config.Config) error { + localDir := cfg.Sync.Local + patterns := cfg.AllIgnorePatterns() + + var included []fileEntry + var excluded []fileEntry + var includedSize int64 + + err := filepath.Walk(localDir, func(path string, info os.FileInfo, err error) error { + if err != nil { + return nil // skip unreadable entries + } + + rel, err := filepath.Rel(localDir, path) + if err != nil { + return nil + } + + // Skip the root directory itself + if rel == "." 
{ + return nil + } + + // Check against ignore patterns + for _, pattern := range patterns { + if matchesIgnorePattern(rel, info, pattern) { + excluded = append(excluded, fileEntry{path: rel, rule: pattern}) + if info.IsDir() { + return filepath.SkipDir + } + return nil + } + } + + if !info.IsDir() { + included = append(included, fileEntry{path: rel}) + includedSize += info.Size() + } + return nil + }) + if err != nil { + return fmt.Errorf("walking %s: %w", localDir, err) + } + + // --- Config summary --- + fmt.Println() + fmt.Printf(" Local: %s\n", cfg.Sync.Local) + fmt.Printf(" Remote: %s\n", cfg.Sync.Remote) + fmt.Println() + + // --- Included files --- + fmt.Println(greenHeader.Render(" Included files:")) + limit := 10 + for i, f := range included { + if i >= limit { + fmt.Printf(" ... %d more files\n", len(included)-limit) + break + } + fmt.Printf(" %s\n", f.path) + } + if len(included) == 0 { + fmt.Println(" (none)") + } + fmt.Println() + + // --- Excluded files --- + fmt.Println(yellowHeader.Render(" Excluded files:")) + for i, f := range excluded { + if i >= limit { + fmt.Printf(" ... %d more excluded\n", len(excluded)-limit) + break + } + fmt.Printf(" %-40s %s\n", f.path, dimText.Render("← "+f.rule)) + } + if len(excluded) == 0 { + fmt.Println(" (none)") + } + fmt.Println() + + // --- Totals --- + totals := fmt.Sprintf(" %d files included (%s) | %d excluded", + len(included), formatSize(includedSize), len(excluded)) + fmt.Println(dimText.Render(totals)) + fmt.Println() + + return nil +} + +// --------------------------------------------------------------------------- +// Pattern matching +// --------------------------------------------------------------------------- + +// matchesIgnorePattern checks whether a file (given its relative path and +// file info) matches a single ignore pattern. It handles bracket/quote +// stripping, ** prefixes, and directory-specific patterns. 
+func matchesIgnorePattern(rel string, info os.FileInfo, pattern string) bool { + // Strip surrounding quotes and brackets + pattern = strings.Trim(pattern, `"'`) + pattern = strings.Trim(pattern, "[]") + pattern = strings.TrimSpace(pattern) + + if pattern == "" { + return false + } + + // Check if this is a directory-only pattern (ends with /) + dirOnly := strings.HasSuffix(pattern, "/") + cleanPattern := strings.TrimSuffix(pattern, "/") + + // Strip **/ prefix for simpler matching + cleanPattern = strings.TrimPrefix(cleanPattern, "**/") + + if dirOnly && !info.IsDir() { + return false + } + + baseName := filepath.Base(rel) + + // Match against base name + if matched, _ := filepath.Match(cleanPattern, baseName); matched { + return true + } + + // Match against full relative path + if matched, _ := filepath.Match(cleanPattern, rel); matched { + return true + } + + // For directory patterns, also try matching directory components + if info.IsDir() { + if matched, _ := filepath.Match(cleanPattern, baseName); matched { + return true + } + } + + return false +} diff --git a/cmd/edit.go b/cmd/edit.go @@ -0,0 +1,101 @@ +package cmd + +import ( + "bufio" + "fmt" + "os" + "os/exec" + "strings" + + "github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" +) + +// --------------------------------------------------------------------------- +// Command +// --------------------------------------------------------------------------- + +var editCmd = &cobra.Command{ + Use: "edit", + Short: "Open config in $EDITOR, then validate and preview", + Long: "Open the esync configuration file in your editor. 
After saving, the config is validated and a file preview is shown.", + RunE: runEdit, +} + +func init() { + rootCmd.AddCommand(editCmd) +} + +// --------------------------------------------------------------------------- +// Run +// --------------------------------------------------------------------------- + +func runEdit(cmd *cobra.Command, args []string) error { + // 1. Find config file + path := cfgFile + if path == "" { + path = config.FindConfigFile() + } + if path == "" { + fmt.Fprintln(os.Stderr, "No config file found. Run `esync init` first to create one.") + return nil + } + + // 2. Determine editor + editor := os.Getenv("EDITOR") + if editor == "" { + editor = "vi" + } + + reader := bufio.NewReader(os.Stdin) + + // 3. Edit loop + for { + // Open editor + editorCmd := exec.Command(editor, path) + editorCmd.Stdin = os.Stdin + editorCmd.Stdout = os.Stdout + editorCmd.Stderr = os.Stderr + + if err := editorCmd.Run(); err != nil { + return fmt.Errorf("editor exited with error: %w", err) + } + + // Validate config + cfg, err := config.Load(path) + if err != nil { + fmt.Fprintf(os.Stderr, "\nConfig error: %s\n", err) + fmt.Print("Press 'e' to edit again, or 'q' to cancel: ") + answer, _ := reader.ReadString('\n') + answer = strings.TrimSpace(strings.ToLower(answer)) + if answer == "q" { + fmt.Println("Cancelled.") + return nil + } + continue + } + + // Valid config — show preview + if err := printPreview(cfg); err != nil { + return err + } + + // Ask user what to do + fmt.Print("Press Enter to accept, 'e' to edit again, or 'q' to cancel: ") + answer, _ := reader.ReadString('\n') + answer = strings.TrimSpace(strings.ToLower(answer)) + + switch answer { + case "q": + fmt.Println("Cancelled.") + return nil + case "e": + continue + default: + // Enter or anything else: accept + fmt.Println("Config saved.") + return nil + } + } +} diff --git a/cmd/init.go b/cmd/init.go @@ -0,0 +1,237 @@ +package cmd + +import ( + "bufio" + "fmt" + "os" + "strings" + + 
"github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" +) + +// --------------------------------------------------------------------------- +// Default patterns (already present in DefaultTOML) +// --------------------------------------------------------------------------- + +// defaultIgnorePatterns lists patterns that DefaultTOML() already includes +// in settings.ignore, so we can skip them when merging from .gitignore. +var defaultIgnorePatterns = map[string]bool{ + ".git": true, + ".git/": true, + "node_modules": true, + "node_modules/": true, + ".DS_Store": true, +} + +// commonDirs lists directories to auto-detect and exclude. +var commonDirs = []string{ + ".git", + "node_modules", + "__pycache__", + "build", + ".venv", + "dist", + ".tox", + ".mypy_cache", +} + +// --------------------------------------------------------------------------- +// Flags +// --------------------------------------------------------------------------- + +var initRemote string + +// --------------------------------------------------------------------------- +// Command +// --------------------------------------------------------------------------- + +var initCmd = &cobra.Command{ + Use: "init", + Short: "Generate an esync.toml configuration file", + Long: "Inspect the current directory to generate a smart esync.toml with .gitignore import and common directory exclusion.", + RunE: runInit, +} + +func init() { + initCmd.Flags().StringVarP(&initRemote, "remote", "r", "", "pre-fill remote destination") + rootCmd.AddCommand(initCmd) +} + +// --------------------------------------------------------------------------- +// Main logic +// --------------------------------------------------------------------------- + +func runInit(cmd *cobra.Command, args []string) error { + // 1. Determine output path + outPath := cfgFile + if outPath == "" { + outPath = "./esync.toml" + } + + // 2. 
If file exists, prompt for overwrite confirmation + if _, err := os.Stat(outPath); err == nil { + fmt.Printf("File %s already exists. Overwrite? [y/N] ", outPath) + reader := bufio.NewReader(os.Stdin) + answer, _ := reader.ReadString('\n') + answer = strings.TrimSpace(strings.ToLower(answer)) + if answer != "y" && answer != "yes" { + fmt.Println("Aborted.") + return nil + } + } + + // 3. Start with default TOML content + content := config.DefaultTOML() + + // 4. Read .gitignore patterns + gitignorePatterns := readGitignore() + + // 5. Detect common directories that exist and aren't already in defaults + detectedDirs := detectCommonDirs() + + // 6. Remote destination: use flag or prompt + remote := initRemote + if remote == "" { + fmt.Print("Remote destination (e.g. user@host:/path/to/dest): ") + reader := bufio.NewReader(os.Stdin) + line, _ := reader.ReadString('\n') + remote = strings.TrimSpace(line) + } + + // Replace remote in TOML content if provided + if remote != "" { + content = strings.Replace( + content, + `remote = "user@host:/path/to/dest"`, + fmt.Sprintf(`remote = %q`, remote), + 1, + ) + } + + // 7. Merge extra ignore patterns into TOML content + var extraPatterns []string + extraPatterns = append(extraPatterns, gitignorePatterns...) + extraPatterns = append(extraPatterns, detectedDirs...) 
+ + // Deduplicate: remove any that are already in defaults or duplicated + seen := make(map[string]bool) + for k := range defaultIgnorePatterns { + seen[k] = true + } + var uniqueExtras []string + for _, p := range extraPatterns { + // Normalize: strip trailing slash for comparison + normalized := strings.TrimSuffix(p, "/") + if seen[normalized] || seen[normalized+"/"] || seen[p] { + continue + } + seen[normalized] = true + seen[normalized+"/"] = true + uniqueExtras = append(uniqueExtras, p) + } + + if len(uniqueExtras) > 0 { + // Build the new ignore list: default patterns + extras + var quoted []string + // Start with the defaults already in the TOML + for _, d := range []string{".git", "node_modules", ".DS_Store"} { + quoted = append(quoted, fmt.Sprintf("%q", d)) + } + for _, p := range uniqueExtras { + quoted = append(quoted, fmt.Sprintf("%q", p)) + } + newIgnoreLine := "ignore = [" + strings.Join(quoted, ", ") + "]" + content = strings.Replace( + content, + `ignore = [".git", "node_modules", ".DS_Store"]`, + newIgnoreLine, + 1, + ) + } + + // 8. Write to file + if err := os.WriteFile(outPath, []byte(content), 0644); err != nil { + return fmt.Errorf("writing config file: %w", err) + } + + // 9. 
Print summary + fmt.Println() + fmt.Printf("Created %s\n", outPath) + fmt.Println() + if len(gitignorePatterns) > 0 { + fmt.Printf(" Imported %d pattern(s) from .gitignore\n", len(gitignorePatterns)) + } + if len(detectedDirs) > 0 { + fmt.Printf(" Auto-excluded %d common dir(s): %s\n", + len(detectedDirs), strings.Join(detectedDirs, ", ")) + } + if len(uniqueExtras) > 0 { + fmt.Printf(" Total extra ignore patterns: %d\n", len(uniqueExtras)) + } + fmt.Println() + fmt.Println("Next steps:") + fmt.Println(" esync check — validate your configuration") + fmt.Println(" esync edit — open the config in your editor") + + return nil +} + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +// readGitignore reads .gitignore in the current directory and returns +// patterns, skipping comments, empty lines, and patterns already present +// in the default ignore list. +func readGitignore() []string { + f, err := os.Open(".gitignore") + if err != nil { + return nil + } + defer f.Close() + + var patterns []string + scanner := bufio.NewScanner(f) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + + // Skip empty lines and comments + if line == "" || strings.HasPrefix(line, "#") { + continue + } + + // Skip patterns already in the defaults + normalized := strings.TrimSuffix(line, "/") + if defaultIgnorePatterns[line] || defaultIgnorePatterns[normalized] || defaultIgnorePatterns[normalized+"/"] { + continue + } + + patterns = append(patterns, line) + } + + return patterns +} + +// detectCommonDirs checks for common directories that should typically be +// excluded, returns the ones that exist on disk and aren't already in the +// default ignore list. 
+func detectCommonDirs() []string { + var found []string + for _, dir := range commonDirs { + // Skip if already in defaults + if defaultIgnorePatterns[dir] || defaultIgnorePatterns[dir+"/"] { + continue + } + + // Check if directory actually exists + info, err := os.Stat(dir) + if err != nil || !info.IsDir() { + continue + } + + found = append(found, dir) + } + return found +} diff --git a/cmd/root.go b/cmd/root.go @@ -0,0 +1,27 @@ +package cmd + +import ( + "fmt" + "os" + + "github.com/spf13/cobra" +) + +var cfgFile string + +var rootCmd = &cobra.Command{ + Use: "esync", + Short: "File synchronization tool using rsync", + Long: "A file sync tool that watches for changes and automatically syncs them to a remote destination using rsync.", +} + +func Execute() { + if err := rootCmd.Execute(); err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(1) + } +} + +func init() { + rootCmd.PersistentFlags().StringVarP(&cfgFile, "config", "c", "", "config file path") +} diff --git a/cmd/status.go b/cmd/status.go @@ -0,0 +1,66 @@ +package cmd + +import ( + "fmt" + "os" + "path/filepath" + "strconv" + "strings" + "syscall" + + "github.com/spf13/cobra" +) + +// --------------------------------------------------------------------------- +// Command +// --------------------------------------------------------------------------- + +var statusCmd = &cobra.Command{ + Use: "status", + Short: "Check if an esync daemon is running", + Long: "Read the PID file and report whether an esync daemon process is currently alive.", + RunE: runStatus, +} + +func init() { + rootCmd.AddCommand(statusCmd) +} + +// --------------------------------------------------------------------------- +// Run +// --------------------------------------------------------------------------- + +func runStatus(cmd *cobra.Command, args []string) error { + pidPath := filepath.Join(os.TempDir(), "esync.pid") + + data, err := os.ReadFile(pidPath) + if err != nil { + if os.IsNotExist(err) { + fmt.Println("No esync daemon 
running.") + return nil + } + return fmt.Errorf("reading PID file: %w", err) + } + + pid, err := strconv.Atoi(strings.TrimSpace(string(data))) + if err != nil { + return fmt.Errorf("invalid PID file content: %w", err) + } + + process, err := os.FindProcess(pid) + if err != nil { + fmt.Println("No esync daemon running (stale PID file).") + os.Remove(pidPath) + return nil + } + + // Signal 0 checks whether the process is alive without actually sending a signal. + if err := process.Signal(syscall.Signal(0)); err != nil { + fmt.Println("No esync daemon running (stale PID file).") + os.Remove(pidPath) + return nil + } + + fmt.Printf("esync daemon running (PID %d)\n", pid) + return nil +} diff --git a/cmd/sync.go b/cmd/sync.go @@ -0,0 +1,336 @@ +package cmd + +import ( + "fmt" + "os" + "os/signal" + "path/filepath" + "syscall" + "time" + + tea "github.com/charmbracelet/bubbletea" + "github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" + "github.com/eloualiche/esync/internal/logger" + "github.com/eloualiche/esync/internal/syncer" + "github.com/eloualiche/esync/internal/tui" + "github.com/eloualiche/esync/internal/watcher" +) + +// --------------------------------------------------------------------------- +// Flags +// --------------------------------------------------------------------------- + +var ( + localPath string + remotePath string + daemon bool + dryRun bool + initialSync bool + verbose bool +) + +// --------------------------------------------------------------------------- +// Command +// --------------------------------------------------------------------------- + +var syncCmd = &cobra.Command{ + Use: "sync", + Short: "Watch and sync files to a remote destination", + Long: "Watch a local directory for changes and automatically sync them to a remote destination using rsync.", + RunE: runSync, +} + +func init() { + syncCmd.Flags().StringVarP(&localPath, "local", "l", "", "local path to watch") + syncCmd.Flags().StringVarP(&remotePath, 
"remote", "r", "", "remote destination path") + syncCmd.Flags().BoolVar(&daemon, "daemon", false, "run in daemon mode (no TUI)") + syncCmd.Flags().BoolVar(&dryRun, "dry-run", false, "show what would be synced without syncing") + syncCmd.Flags().BoolVar(&initialSync, "initial-sync", false, "force a full sync on startup") + syncCmd.Flags().BoolVarP(&verbose, "verbose", "v", false, "verbose output") + + rootCmd.AddCommand(syncCmd) +} + +// --------------------------------------------------------------------------- +// Config loading +// --------------------------------------------------------------------------- + +// loadOrBuildConfig resolves configuration from CLI flags, a config file, or +// builds a minimal config in memory when --local and --remote are both given. +func loadOrBuildConfig() (*config.Config, error) { + // 1. Explicit config file via -c flag + if cfgFile != "" { + cfg, err := config.Load(cfgFile) + if err != nil { + return nil, fmt.Errorf("loading config %s: %w", cfgFile, err) + } + applyCLIOverrides(cfg) + return cfg, nil + } + + // 2. Quick mode: both --local and --remote provided + if localPath != "" && remotePath != "" { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: localPath, + Remote: remotePath, + Interval: 1, + }, + Settings: config.Settings{ + WatcherDebounce: 500, + InitialSync: initialSync, + Ignore: []string{".git", "node_modules", ".DS_Store"}, + Rsync: config.RsyncSettings{ + Archive: true, + Compress: true, + }, + }, + } + return cfg, nil + } + + // 3. Auto-detect config file + path := config.FindConfigFile() + if path == "" { + return nil, fmt.Errorf("no config file found; use -c, or provide both -l and -r") + } + + cfg, err := config.Load(path) + if err != nil { + return nil, fmt.Errorf("loading config %s: %w", path, err) + } + applyCLIOverrides(cfg) + return cfg, nil +} + +// applyCLIOverrides applies command-line flag values onto a loaded config. 
+func applyCLIOverrides(cfg *config.Config) { + if localPath != "" { + cfg.Sync.Local = localPath + } + if remotePath != "" { + cfg.Sync.Remote = remotePath + } + if initialSync { + cfg.Settings.InitialSync = true + } +} + +// --------------------------------------------------------------------------- +// Run entry point +// --------------------------------------------------------------------------- + +func runSync(cmd *cobra.Command, args []string) error { + cfg, err := loadOrBuildConfig() + if err != nil { + return err + } + + s := syncer.New(cfg) + s.DryRun = dryRun + + // Optional initial sync + if cfg.Settings.InitialSync { + if verbose { + fmt.Println("Running initial sync...") + } + result, err := s.Run() + if err != nil { + fmt.Fprintf(os.Stderr, "Initial sync error: %s\n", result.ErrorMessage) + } else if verbose { + fmt.Printf("Initial sync complete: %d files, %s\n", result.FilesCount, formatSize(result.BytesTotal)) + } + } + + if daemon { + return runDaemon(cfg, s) + } + return runTUI(cfg, s) +} + +// --------------------------------------------------------------------------- +// TUI mode +// --------------------------------------------------------------------------- + +func runTUI(cfg *config.Config, s *syncer.Syncer) error { + app := tui.NewApp(cfg.Sync.Local, cfg.Sync.Remote) + syncCh := app.SyncEventChan() + + handler := func() { + // Send a "syncing" event before starting + syncCh <- tui.SyncEvent{ + File: cfg.Sync.Local, + Status: "syncing", + Time: time.Now(), + } + + result, err := s.Run() + now := time.Now() + + if err != nil { + syncCh <- tui.SyncEvent{ + File: cfg.Sync.Local, + Status: "error", + Time: now, + } + return + } + + // Send individual file events + for _, f := range result.Files { + syncCh <- tui.SyncEvent{ + File: f, + Size: formatSize(result.BytesTotal), + Duration: result.Duration, + Status: "synced", + Time: now, + } + } + + // If no individual files reported, send a summary event + if len(result.Files) == 0 && 
result.FilesCount > 0 { + syncCh <- tui.SyncEvent{ + File: fmt.Sprintf("%d files", result.FilesCount), + Size: formatSize(result.BytesTotal), + Duration: result.Duration, + Status: "synced", + Time: now, + } + } + } + + w, err := watcher.New( + cfg.Sync.Local, + cfg.Settings.WatcherDebounce, + cfg.AllIgnorePatterns(), + handler, + ) + if err != nil { + return fmt.Errorf("creating watcher: %w", err) + } + + if err := w.Start(); err != nil { + return fmt.Errorf("starting watcher: %w", err) + } + + p := tea.NewProgram(app, tea.WithAltScreen()) + if _, err := p.Run(); err != nil { + w.Stop() + return fmt.Errorf("TUI error: %w", err) + } + + w.Stop() + return nil +} + +// --------------------------------------------------------------------------- +// Daemon mode +// --------------------------------------------------------------------------- + +func runDaemon(cfg *config.Config, s *syncer.Syncer) error { + // Write PID file so `esync status` can find us + pidPath := filepath.Join(os.TempDir(), "esync.pid") + os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", os.Getpid())), 0644) + defer os.Remove(pidPath) + + var log *logger.Logger + if cfg.Settings.Log.File != "" { + var err error + log, err = logger.New(cfg.Settings.Log.File, cfg.Settings.Log.Format) + if err != nil { + return fmt.Errorf("creating logger: %w", err) + } + defer log.Close() + } + + fmt.Printf("esync daemon started (PID %d)\n", os.Getpid()) + fmt.Printf("Watching: %s -> %s\n", cfg.Sync.Local, cfg.Sync.Remote) + + if log != nil { + log.Info("started", map[string]interface{}{ + "local": cfg.Sync.Local, + "remote": cfg.Sync.Remote, + "pid": os.Getpid(), + }) + } + + handler := func() { + result, err := s.Run() + + if err != nil { + msg := result.ErrorMessage + if verbose { + fmt.Fprintf(os.Stderr, "Sync error: %s\n", msg) + } + if log != nil { + log.Error("sync_failed", map[string]interface{}{ + "error": msg, + }) + } + // Terminal bell on error + fmt.Print("\a") + return + } + + if verbose { + 
fmt.Printf("Synced %d files (%s) in %s\n", + result.FilesCount, + formatSize(result.BytesTotal), + result.Duration.Truncate(time.Millisecond), + ) + } + if log != nil { + log.Info("sync_complete", map[string]interface{}{ + "files": result.FilesCount, + "bytes": result.BytesTotal, + "duration": result.Duration.String(), + }) + } + } + + w, err := watcher.New( + cfg.Sync.Local, + cfg.Settings.WatcherDebounce, + cfg.AllIgnorePatterns(), + handler, + ) + if err != nil { + return fmt.Errorf("creating watcher: %w", err) + } + + if err := w.Start(); err != nil { + return fmt.Errorf("starting watcher: %w", err) + } + defer w.Stop() + + // Block until SIGINT or SIGTERM + sigCh := make(chan os.Signal, 1) + signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM) + <-sigCh + + if log != nil { + log.Info("stopping", nil) + } + fmt.Println("\nesync daemon stopped.") + return nil +} + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +// formatSize converts a byte count to a human-readable string (B, KB, MB, GB). +func formatSize(bytes int64) string { + switch { + case bytes >= 1<<30: + return fmt.Sprintf("%.1fGB", float64(bytes)/float64(1<<30)) + case bytes >= 1<<20: + return fmt.Sprintf("%.1fMB", float64(bytes)/float64(1<<20)) + case bytes >= 1<<10: + return fmt.Sprintf("%.1fKB", float64(bytes)/float64(1<<10)) + default: + return fmt.Sprintf("%dB", bytes) + } +} diff --git a/docs/plans/2026-03-01-go-rewrite-design.md b/docs/plans/2026-03-01-go-rewrite-design.md @@ -0,0 +1,262 @@ +# esync Go Rewrite — Design Document + +Date: 2026-03-01 + +## Motivation + +Rewrite esync from Python to Go for three equal priorities: + +1. **Single binary distribution** — no Python/pip dependency; download and run +2. **Performance** — faster startup, lower memory, better for long-running watch processes +3. 
**Better TUI** — polished interactive dashboard using Bubbletea/Lipgloss + +## Technology Stack + +| Component | Library | Purpose | +|-----------|---------|---------| +| CLI framework | [Cobra](https://github.com/spf13/cobra) | Subcommands, flags, help generation | +| Configuration | [Viper](https://github.com/spf13/viper) | TOML loading, config file search path, env vars | +| TUI framework | [Bubbletea](https://github.com/charmbracelet/bubbletea) | Interactive terminal UI | +| TUI styling | [Lipgloss](https://github.com/charmbracelet/lipgloss) | Borders, colors, layout | +| File watching | [fsnotify](https://github.com/fsnotify/fsnotify) | Cross-platform filesystem events | +| Sync engine | rsync (external) | File transfer via subprocess | + +## Project Structure + +``` +esync/ +├── cmd/ +│ ├── root.go # Root command, global flags +│ ├── sync.go # esync sync — main watch+sync command +│ ├── init.go # esync init — smart config generation +│ ├── check.go # esync check — validate config + preview +│ ├── edit.go # esync edit — open in $EDITOR + preview +│ └── status.go # esync status — check running daemon +├── internal/ +│ ├── config/ # TOML config models, loading, validation +│ ├── watcher/ # fsnotify wrapper with debouncing +│ ├── syncer/ # rsync command builder and executor +│ ├── tui/ # Bubbletea models, views, styles +│ │ ├── app.go # Root Bubbletea model +│ │ ├── dashboard.go # Main dashboard view +│ │ ├── logview.go # Scrollable log view +│ │ └── styles.go # Lipgloss style definitions +│ └── logger/ # Structured logging (file + JSON) +├── main.go # Entry point +├── esync.toml # Example config +└── go.mod +``` + +## CLI Commands + +``` +esync sync [flags] Start watching and syncing + -c, --config PATH Config file path + -l, --local PATH Override local path + -r, --remote PATH Override remote path + --daemon Run without TUI, log to file + --dry-run Show what would sync without executing + --initial-sync Force full sync on startup + -v, --verbose Verbose output 
+ +esync init [flags] Generate config from current directory + -c, --config PATH Output path (default: ./esync.toml) + -r, --remote PATH Pre-fill remote destination + +esync check [flags] Validate config and show file include/exclude preview + -c, --config PATH Config file path + +esync edit [flags] Open config in $EDITOR, then show preview + -c, --config PATH Config file path + +esync status Check if daemon is running, show last sync info +``` + +### Quick usage (no config file) + +`esync sync -l ./src -r user@host:/deploy` works without a config file when both local and remote are provided as flags. + +## Configuration + +### Format + +TOML. Search order: + +1. `-c` / `--config` flag +2. `./esync.toml` +3. `~/.config/esync/config.toml` +4. `/etc/esync/config.toml` + +### Schema + +```toml +[sync] +local = "./src" +remote = "user@host:/deploy/src" +interval = 1 # debounce interval in seconds + +[sync.ssh] +host = "example.com" +user = "deploy" +port = 22 +identity_file = "~/.ssh/id_ed25519" +interactive_auth = true # for 2FA prompts + +[settings] +watcher_debounce = 500 # ms, batch rapid changes +initial_sync = true # full rsync on startup +ignore = ["*.log", "*.tmp", ".env"] + +[settings.rsync] +archive = true +compress = true +backup = true +backup_dir = ".rsync_backup" +progress = true +extra_args = ["--delete"] # pass-through for any rsync flags +ignore = [".git/", "node_modules/", "**/__pycache__/"] + +[settings.log] +file = "~/.local/share/esync/esync.log" +format = "json" # "json" or "text" +``` + +### Smart init + +`esync init` inspects the current directory: + +- Detects `.gitignore` and imports patterns into `settings.rsync.ignore` +- Auto-excludes common directories (`.git/`, `node_modules/`, `__pycache__/`, `build/`, `.venv/`) +- Pre-fills `sync.local` with `.` +- Accepts `-r` flag or prompts for remote destination +- Shows `esync check` preview after generating + +## TUI Design + +### Main dashboard view + +``` + esync 
───────────────────────────────────────── + ./src → user@host:/deploy/src + ● Watching (synced 3s ago) + + Recent ────────────────────────────────────── + ✓ src/main.go 2.1KB 0.3s + ✓ src/config.go 1.4KB 0.2s + ⟳ src/handler.go syncing... + ✓ src/util.go 890B 0.1s + + Stats ─────────────────────────────────────── + 142 synced │ 3.2MB total │ 0 errors + + q quit p pause r full resync l logs d dry-run / filter +``` + +### Log view (toggle with `l`) + +``` + esync ─ logs ────────────────────────────────── + 14:23:01 INF synced src/main.go (2.1KB, 0.3s) + 14:23:03 INF synced src/config.go (1.4KB, 0.2s) + 14:23:05 INF syncing src/handler.go... + 14:23:06 INF synced src/handler.go (890B, 0.4s) + 14:23:06 WRN permission denied: .env (skipped) + 14:23:15 INF idle, watching for changes + + ↑↓ scroll / filter l back q quit +``` + +### Keyboard shortcuts + +| Key | Action | +|-----|--------| +| `q` | Quit | +| `p` | Pause/resume watching | +| `r` | Trigger full resync | +| `l` | Toggle log view | +| `d` | Dry-run next sync | +| `/` | Filter file list / log entries | + +### Styling + +Lipgloss with a subtle color palette: +- Green: success/synced +- Yellow: in-progress/syncing +- Red: errors +- Dim: timestamps, stats + +Clean and minimal — not flashy. + +## Runtime Modes + +### Interactive (default) + +`esync sync` launches the Bubbletea TUI dashboard. All events render live. 
+ +### Daemon + +`esync sync --daemon` runs without TUI: +- Writes JSON lines to log file +- Prints PID on startup +- Terminal bell on sync errors + +Log format: +```json +{"time":"14:23:01","level":"info","event":"synced","file":"src/main.go","size":2150,"duration_ms":300} +{"time":"14:23:06","level":"warn","event":"skipped","file":".env","reason":"permission denied"} +``` + +Check with `esync status`: +``` +esync daemon running (PID 42351) + Watching: ./src → user@host:/deploy/src + Last sync: 3s ago (src/main.go) + Session: 142 files synced, 0 errors +``` + +## Data Flow + +``` +fsnotify event + → debouncer (batches events over configurable window, default 500ms) + → syncer (builds rsync command, executes) + → result (parsed rsync output: files, sizes, duration, errors) + → TUI update OR log write +``` + +## Features + +### Carried from Python version +- File watching with configurable ignore patterns +- rsync-based sync with SSH support +- TOML configuration with search path +- Archive, compress, backup options +- SSH authentication (key, password, interactive/2FA) +- CLI flag overrides for local/remote paths + +### New in Go version +- **Debouncing** — batch rapid file changes into single rsync call +- **Initial sync on start** — optional full rsync before entering watch mode +- **Dry-run mode** — `--dry-run` flag and `d` key in TUI +- **Daemon mode** — `--daemon` with JSON log output and PID tracking +- **`esync status`** — check running daemon state +- **`esync check`** — validate config and show file include/exclude preview +- **`esync edit`** — open config in `$EDITOR`, then show preview +- **Smart `esync init`** — generate config from current directory, import .gitignore +- **rsync extra_args** — pass-through for arbitrary rsync flags +- **Pause/resume** — `p` key in TUI +- **Scrollable log view** — `l` key with `/` filter +- **SSH ControlMaster** — keep SSH connections alive between syncs +- **Sync sound** — terminal bell on errors +- **File filter in 
TUI** — `/` to search recent events and logs + +### Dropped from Python version +- Watchman backend (fsnotify only) +- YAML dependency +- Dual watcher abstraction layer + +## System Requirements + +- Go 1.22+ (build time only) +- rsync 3.x +- macOS / Linux (fsnotify supports both) diff --git a/docs/plans/2026-03-01-go-rewrite-plan.md b/docs/plans/2026-03-01-go-rewrite-plan.md @@ -0,0 +1,2924 @@ +# esync Go Rewrite — Implementation Plan + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Rewrite esync from Python to Go with a Bubbletea TUI, Cobra CLI, and Viper-based TOML configuration. + +**Architecture:** Cobra CLI dispatches to subcommands. `esync sync` launches either a Bubbletea TUI (default) or daemon mode. fsnotify watches files, debouncer batches events, syncer executes rsync. Viper loads TOML config with a search path. + +**Tech Stack:** Go 1.22+, Cobra, Viper, Bubbletea, Lipgloss, fsnotify, rsync (external) + +--- + +### Task 1: Project Scaffolding + +**Files:** +- Create: `main.go` +- Create: `go.mod` +- Create: `cmd/root.go` +- Create: `internal/config/config.go` +- Create: `internal/syncer/syncer.go` +- Create: `internal/watcher/watcher.go` +- Create: `internal/tui/app.go` +- Create: `internal/logger/logger.go` + +**Step 1: Remove Python source files** + +Delete the Python package and build files (we're on a feature branch): +```bash +rm -rf esync/ pyproject.toml uv.lock .python-version +``` + +**Step 2: Initialize Go module** + +```bash +go mod init github.com/eloualiche/esync +``` + +**Step 3: Create directory structure** + +```bash +mkdir -p cmd internal/config internal/syncer internal/watcher internal/tui internal/logger +``` + +**Step 4: Create minimal main.go** + +```go +package main + +import "github.com/eloualiche/esync/cmd" + +func main() { + cmd.Execute() +} +``` + +**Step 5: Create root command stub** + +```go +// cmd/root.go +package cmd + +import ( + "fmt" + "os" + + 
"github.com/spf13/cobra" +) + +var cfgFile string + +var rootCmd = &cobra.Command{ + Use: "esync", + Short: "File synchronization tool using rsync", + Long: "A file sync tool that watches for changes and automatically syncs them to a remote destination using rsync.", +} + +func Execute() { + if err := rootCmd.Execute(); err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(1) + } +} + +func init() { + rootCmd.PersistentFlags().StringVarP(&cfgFile, "config", "c", "", "config file path") +} +``` + +**Step 6: Install dependencies and verify build** + +```bash +go get github.com/spf13/cobra +go get github.com/spf13/viper +go get github.com/fsnotify/fsnotify +go get github.com/charmbracelet/bubbletea +go get github.com/charmbracelet/lipgloss +go mod tidy +go build ./... +``` + +**Step 7: Commit** + +```bash +git add -A +git commit -m "feat: scaffold Go project with Cobra root command" +``` + +--- + +### Task 2: Configuration Package + +**Files:** +- Create: `internal/config/config.go` +- Create: `internal/config/config_test.go` + +**Step 1: Write failing tests for config structs and loading** + +```go +// internal/config/config_test.go +package config + +import ( + "os" + "path/filepath" + "testing" +) + +func TestLoadConfig(t *testing.T) { + dir := t.TempDir() + tomlPath := filepath.Join(dir, "esync.toml") + + content := []byte(` +[sync] +local = "./src" +remote = "user@host:/deploy" +interval = 1 + +[settings] +watcher_debounce = 500 +initial_sync = true +ignore = ["*.log", "*.tmp"] + +[settings.rsync] +archive = true +compress = true +backup = true +backup_dir = ".rsync_backup" +progress = true +ignore = [".git/", "node_modules/"] + +[settings.log] +file = "/tmp/esync.log" +format = "json" +`) + os.WriteFile(tomlPath, content, 0644) + + cfg, err := Load(tomlPath) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if cfg.Sync.Local != "./src" { + t.Errorf("expected local=./src, got %s", cfg.Sync.Local) + } + if cfg.Sync.Remote != "user@host:/deploy" { + 
t.Errorf("expected remote=user@host:/deploy, got %s", cfg.Sync.Remote) + } + if cfg.Settings.WatcherDebounce != 500 { + t.Errorf("expected debounce=500, got %d", cfg.Settings.WatcherDebounce) + } + if !cfg.Settings.InitialSync { + t.Error("expected initial_sync=true") + } + if len(cfg.Settings.Ignore) != 2 { + t.Errorf("expected 2 ignore patterns, got %d", len(cfg.Settings.Ignore)) + } + if !cfg.Settings.Rsync.Archive { + t.Error("expected rsync archive=true") + } + if cfg.Settings.Log.Format != "json" { + t.Errorf("expected log format=json, got %s", cfg.Settings.Log.Format) + } +} + +func TestLoadConfigWithSSH(t *testing.T) { + dir := t.TempDir() + tomlPath := filepath.Join(dir, "esync.toml") + + content := []byte(` +[sync] +local = "./src" +remote = "/deploy" + +[sync.ssh] +host = "example.com" +user = "deploy" +port = 22 +identity_file = "~/.ssh/id_ed25519" +interactive_auth = true +`) + os.WriteFile(tomlPath, content, 0644) + + cfg, err := Load(tomlPath) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if cfg.Sync.SSH == nil { + t.Fatal("expected SSH config to be set") + } + if cfg.Sync.SSH.Host != "example.com" { + t.Errorf("expected host=example.com, got %s", cfg.Sync.SSH.Host) + } + if cfg.Sync.SSH.User != "deploy" { + t.Errorf("expected user=deploy, got %s", cfg.Sync.SSH.User) + } + if cfg.Sync.SSH.IdentityFile != "~/.ssh/id_ed25519" { + t.Errorf("expected identity_file, got %s", cfg.Sync.SSH.IdentityFile) + } +} + +func TestLoadConfigDefaults(t *testing.T) { + dir := t.TempDir() + tomlPath := filepath.Join(dir, "esync.toml") + + content := []byte(` +[sync] +local = "./src" +remote = "./dst" +`) + os.WriteFile(tomlPath, content, 0644) + + cfg, err := Load(tomlPath) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if cfg.Settings.WatcherDebounce != 500 { + t.Errorf("expected default debounce=500, got %d", cfg.Settings.WatcherDebounce) + } + if cfg.Settings.Rsync.Archive != true { + t.Error("expected default archive=true") + } +} 
+ +func TestIsRemote(t *testing.T) { + tests := []struct { + remote string + want bool + }{ + {"user@host:/path", true}, + {"host:/path", true}, + {"./local/path", false}, + {"/absolute/path", false}, + {"C:/windows/path", false}, + } + for _, tt := range tests { + cfg := &Config{Sync: SyncSection{Remote: tt.remote}} + if got := cfg.IsRemote(); got != tt.want { + t.Errorf("IsRemote(%q) = %v, want %v", tt.remote, got, tt.want) + } + } +} + +func TestFindConfigFile(t *testing.T) { + dir := t.TempDir() + tomlPath := filepath.Join(dir, "esync.toml") + os.WriteFile(tomlPath, []byte("[sync]\nlocal = \".\"\nremote = \".\"\n"), 0644) + + found := FindConfigIn([]string{tomlPath}) + if found != tomlPath { + t.Errorf("expected %s, got %s", tomlPath, found) + } +} + +func TestFindConfigFileNotFound(t *testing.T) { + found := FindConfigIn([]string{"/nonexistent/esync.toml"}) + if found != "" { + t.Errorf("expected empty string, got %s", found) + } +} +``` + +**Step 2: Run tests to verify they fail** + +```bash +cd internal/config && go test -v +``` +Expected: compilation errors (types don't exist yet) + +**Step 3: Implement config package** + +```go +// internal/config/config.go +package config + +import ( + "fmt" + "os" + "path/filepath" + "regexp" + "strings" + + "github.com/spf13/viper" +) + +// SSHConfig holds SSH connection settings. +type SSHConfig struct { + Host string `mapstructure:"host"` + User string `mapstructure:"user"` + Port int `mapstructure:"port"` + IdentityFile string `mapstructure:"identity_file"` + InteractiveAuth bool `mapstructure:"interactive_auth"` +} + +// SyncSection holds source and destination paths. +type SyncSection struct { + Local string `mapstructure:"local"` + Remote string `mapstructure:"remote"` + Interval int `mapstructure:"interval"` + SSH *SSHConfig `mapstructure:"ssh"` +} + +// RsyncSettings holds rsync-specific options. 
+type RsyncSettings struct { + Archive bool `mapstructure:"archive"` + Compress bool `mapstructure:"compress"` + Backup bool `mapstructure:"backup"` + BackupDir string `mapstructure:"backup_dir"` + Progress bool `mapstructure:"progress"` + ExtraArgs []string `mapstructure:"extra_args"` + Ignore []string `mapstructure:"ignore"` +} + +// LogSettings holds logging configuration. +type LogSettings struct { + File string `mapstructure:"file"` + Format string `mapstructure:"format"` +} + +// Settings holds all application settings. +type Settings struct { + WatcherDebounce int `mapstructure:"watcher_debounce"` + InitialSync bool `mapstructure:"initial_sync"` + Ignore []string `mapstructure:"ignore"` + Rsync RsyncSettings `mapstructure:"rsync"` + Log LogSettings `mapstructure:"log"` +} + +// Config is the top-level configuration. +type Config struct { + Sync SyncSection `mapstructure:"sync"` + Settings Settings `mapstructure:"settings"` +} + +// IsRemote returns true if the remote target is an SSH destination. +func (c *Config) IsRemote() bool { + if c.Sync.SSH != nil && c.Sync.SSH.Host != "" { + return true + } + return isRemotePath(c.Sync.Remote) +} + +// isRemotePath checks if a path string looks like user@host:/path or host:/path. +func isRemotePath(path string) bool { + if len(path) >= 2 && path[1] == ':' && (path[0] >= 'A' && path[0] <= 'Z' || path[0] >= 'a' && path[0] <= 'z') { + return false // Windows drive letter + } + re := regexp.MustCompile(`^(?:[^@]+@)?[^/:]+:.+$`) + return re.MatchString(path) +} + +// AllIgnorePatterns returns combined ignore patterns from settings and rsync. +func (c *Config) AllIgnorePatterns() []string { + combined := make([]string, 0, len(c.Settings.Ignore)+len(c.Settings.Rsync.Ignore)) + combined = append(combined, c.Settings.Ignore...) + combined = append(combined, c.Settings.Rsync.Ignore...) + return combined +} + +// Load reads and parses a TOML config file. 
+func Load(path string) (*Config, error) { + v := viper.New() + v.SetConfigFile(path) + v.SetConfigType("toml") + + // Defaults + v.SetDefault("sync.interval", 1) + v.SetDefault("settings.watcher_debounce", 500) + v.SetDefault("settings.initial_sync", false) + v.SetDefault("settings.rsync.archive", true) + v.SetDefault("settings.rsync.compress", true) + v.SetDefault("settings.rsync.backup", false) + v.SetDefault("settings.rsync.backup_dir", ".rsync_backup") + v.SetDefault("settings.rsync.progress", true) + v.SetDefault("settings.log.format", "text") + + if err := v.ReadInConfig(); err != nil { + return nil, fmt.Errorf("reading config: %w", err) + } + + var cfg Config + if err := v.Unmarshal(&cfg); err != nil { + return nil, fmt.Errorf("parsing config: %w", err) + } + + if cfg.Sync.Local == "" { + return nil, fmt.Errorf("sync.local is required") + } + if cfg.Sync.Remote == "" { + return nil, fmt.Errorf("sync.remote is required") + } + + return &cfg, nil +} + +// FindConfigFile searches default locations for a config file. +func FindConfigFile() string { + home, _ := os.UserHomeDir() + paths := []string{ + filepath.Join(".", "esync.toml"), + filepath.Join(home, ".config", "esync", "config.toml"), + "/etc/esync/config.toml", + } + return FindConfigIn(paths) +} + +// FindConfigIn searches the given paths for the first existing file. +func FindConfigIn(paths []string) string { + for _, p := range paths { + if _, err := os.Stat(p); err == nil { + return p + } + } + return "" +} + +// DefaultTOML returns a default config as a TOML string. +func DefaultTOML() string { + return strings.TrimSpace(` +[sync] +local = "." 
+remote = "./remote" +interval = 1 + +# [sync.ssh] +# host = "example.com" +# user = "username" +# port = 22 +# identity_file = "~/.ssh/id_ed25519" +# interactive_auth = true + +[settings] +watcher_debounce = 500 +initial_sync = false +ignore = ["*.log", "*.tmp", ".env"] + +[settings.rsync] +archive = true +compress = true +backup = false +backup_dir = ".rsync_backup" +progress = true +extra_args = [] +ignore = [".git/", "node_modules/", "**/__pycache__/"] + +[settings.log] +# file = "~/.local/share/esync/esync.log" +format = "text" +`) + "\n" +} +``` + +**Step 4: Run tests to verify they pass** + +```bash +cd internal/config && go test -v +``` +Expected: all PASS + +**Step 5: Commit** + +```bash +git add internal/config/ +git commit -m "feat: add config package with TOML loading, defaults, and search path" +``` + +--- + +### Task 3: Syncer Package + +**Files:** +- Create: `internal/syncer/syncer.go` +- Create: `internal/syncer/syncer_test.go` + +**Step 1: Write failing tests for rsync command building** + +```go +// internal/syncer/syncer_test.go +package syncer + +import ( + "testing" + + "github.com/eloualiche/esync/internal/config" +) + +func TestBuildCommand_Local(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/tmp/dst", + }, + Settings: config.Settings{ + Rsync: config.RsyncSettings{ + Archive: true, + Compress: true, + Progress: true, + Ignore: []string{".git/", "node_modules/"}, + }, + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + if cmd[0] != "rsync" { + t.Errorf("expected rsync, got %s", cmd[0]) + } + if !contains(cmd, "--archive") { + t.Error("expected --archive flag") + } + if !contains(cmd, "--compress") { + t.Error("expected --compress flag") + } + // Source should end with / + source := cmd[len(cmd)-2] + if source[len(source)-1] != '/' { + t.Errorf("source should end with /, got %s", source) + } +} + +func TestBuildCommand_Remote(t *testing.T) { + cfg := &config.Config{ + Sync: 
config.SyncSection{ + Local: "/tmp/src", + Remote: "user@host:/deploy", + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + dest := cmd[len(cmd)-1] + if dest != "user@host:/deploy" { + t.Errorf("expected user@host:/deploy, got %s", dest) + } +} + +func TestBuildCommand_SSHConfig(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/deploy", + SSH: &config.SSHConfig{ + Host: "example.com", + User: "deploy", + Port: 2222, + IdentityFile: "~/.ssh/id_ed25519", + }, + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + dest := cmd[len(cmd)-1] + if dest != "deploy@example.com:/deploy" { + t.Errorf("expected deploy@example.com:/deploy, got %s", dest) + } + if !containsPrefix(cmd, "-e") { + t.Error("expected -e flag for SSH") + } +} + +func TestBuildCommand_ExcludePatterns(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/tmp/dst", + }, + Settings: config.Settings{ + Ignore: []string{"*.log"}, + Rsync: config.RsyncSettings{ + Ignore: []string{".git/"}, + }, + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + excludeCount := 0 + for _, arg := range cmd { + if arg == "--exclude" { + excludeCount++ + } + } + if excludeCount != 2 { + t.Errorf("expected 2 exclude flags, got %d", excludeCount) + } +} + +func TestBuildCommand_ExtraArgs(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/tmp/dst", + }, + Settings: config.Settings{ + Rsync: config.RsyncSettings{ + ExtraArgs: []string{"--delete", "--checksum"}, + }, + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + if !contains(cmd, "--delete") { + t.Error("expected --delete from extra_args") + } + if !contains(cmd, "--checksum") { + t.Error("expected --checksum from extra_args") + } +} + +func TestBuildCommand_DryRun(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/tmp/dst", + }, + } + + s := New(cfg) + s.DryRun 
= true + cmd := s.BuildCommand() + + if !contains(cmd, "--dry-run") { + t.Error("expected --dry-run flag") + } +} + +func TestBuildCommand_Backup(t *testing.T) { + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: "/tmp/src", + Remote: "/tmp/dst", + }, + Settings: config.Settings{ + Rsync: config.RsyncSettings{ + Backup: true, + BackupDir: ".backup", + }, + }, + } + + s := New(cfg) + cmd := s.BuildCommand() + + if !contains(cmd, "--backup") { + t.Error("expected --backup flag") + } + if !contains(cmd, "--backup-dir=.backup") { + t.Error("expected --backup-dir flag") + } +} + +func contains(args []string, target string) bool { + for _, a := range args { + if a == target { + return true + } + } + return false +} + +func containsPrefix(args []string, prefix string) bool { + for _, a := range args { + if len(a) >= len(prefix) && a[:len(prefix)] == prefix { + return true + } + } + return false +} +``` + +**Step 2: Run tests to verify they fail** + +```bash +go test ./internal/syncer/ -v +``` + +**Step 3: Implement syncer package** + +```go +// internal/syncer/syncer.go +package syncer + +import ( + "fmt" + "os/exec" + "regexp" + "strconv" + "strings" + "time" + + "github.com/eloualiche/esync/internal/config" +) + +// Result holds the outcome of a sync operation. +type Result struct { + Success bool + FilesCount int + BytesTotal int64 + Duration time.Duration + Files []string + ErrorMessage string +} + +// Syncer builds and executes rsync commands. +type Syncer struct { + cfg *config.Config + DryRun bool +} + +// New creates a new Syncer. +func New(cfg *config.Config) *Syncer { + return &Syncer{cfg: cfg} +} + +// BuildCommand constructs the rsync argument list. 
+func (s *Syncer) BuildCommand() []string {
+	cmd := []string{"rsync", "--recursive", "--times", "--copy-unsafe-links"}
+
+	rs := s.cfg.Settings.Rsync
+	if rs.Archive {
+		cmd = append(cmd, "--archive")
+	}
+	if rs.Compress {
+		cmd = append(cmd, "--compress")
+	}
+	if rs.Progress {
+		// Honor the settings.rsync.progress key (defaults to true in Load)
+		cmd = append(cmd, "--progress")
+	}
+	if rs.Backup {
+		cmd = append(cmd, "--backup")
+		cmd = append(cmd, fmt.Sprintf("--backup-dir=%s", rs.BackupDir))
+	}
+	if s.DryRun {
+		cmd = append(cmd, "--dry-run")
+	}
+
+	// Exclude patterns
+	for _, pattern := range s.cfg.AllIgnorePatterns() {
+		clean := strings.Trim(pattern, "\"[]'")
+		if strings.HasPrefix(clean, "**/") {
+			clean = clean[3:]
+		}
+		cmd = append(cmd, "--exclude", clean)
+	}
+
+	// Extra args passthrough
+	cmd = append(cmd, rs.ExtraArgs...)
+
+	// SSH options
+	sshCmd := s.buildSSHCommand()
+	if sshCmd != "" {
+		cmd = append(cmd, "-e", sshCmd)
+	}
+
+	// Source (always ends with /)
+	source := s.cfg.Sync.Local
+	if !strings.HasSuffix(source, "/") {
+		source += "/"
+	}
+	cmd = append(cmd, source)
+
+	// Destination
+	cmd = append(cmd, s.buildDestination())
+
+	return cmd
+}
+
+// Run executes the rsync command and returns the result.
+func (s *Syncer) Run() (*Result, error) {
+	args := s.BuildCommand()
+	start := time.Now()
+
+	c := exec.Command(args[0], args[1:]...)
+ output, err := c.CombinedOutput() + duration := time.Since(start) + + result := &Result{ + Duration: duration, + Files: extractFiles(string(output)), + } + + if err != nil { + result.Success = false + result.ErrorMessage = strings.TrimSpace(string(output)) + if result.ErrorMessage == "" { + result.ErrorMessage = err.Error() + } + return result, err + } + + result.Success = true + result.FilesCount, result.BytesTotal = extractStats(string(output)) + return result, nil +} + +func (s *Syncer) buildSSHCommand() string { + ssh := s.cfg.Sync.SSH + if ssh == nil { + return "" + } + parts := []string{"ssh"} + if ssh.Port != 0 && ssh.Port != 22 { + parts = append(parts, fmt.Sprintf("-p %d", ssh.Port)) + } + if ssh.IdentityFile != "" { + parts = append(parts, fmt.Sprintf("-i %s", ssh.IdentityFile)) + } + // ControlMaster for SSH keepalive + parts = append(parts, "-o", "ControlMaster=auto") + parts = append(parts, "-o", "ControlPath=/tmp/esync-ssh-%r@%h:%p") + parts = append(parts, "-o", "ControlPersist=600") + if len(parts) == 1 { + return "" + } + return strings.Join(parts, " ") +} + +func (s *Syncer) buildDestination() string { + ssh := s.cfg.Sync.SSH + if ssh != nil && ssh.Host != "" { + if ssh.User != "" { + return fmt.Sprintf("%s@%s:%s", ssh.User, ssh.Host, s.cfg.Sync.Remote) + } + return fmt.Sprintf("%s:%s", ssh.Host, s.cfg.Sync.Remote) + } + return s.cfg.Sync.Remote +} + +func extractFiles(output string) []string { + var files []string + skip := regexp.MustCompile(`^(building|sending|sent|total|bytes|\s*$)`) + for _, line := range strings.Split(output, "\n") { + trimmed := strings.TrimSpace(line) + if trimmed == "" || skip.MatchString(trimmed) { + continue + } + parts := strings.Fields(trimmed) + if len(parts) > 0 && !strings.Contains(parts[0], "%") { + files = append(files, parts[0]) + } + } + return files +} + +func extractStats(output string) (int, int64) { + fileRe := regexp.MustCompile(`(\d+) files? 
to consider`) + bytesRe := regexp.MustCompile(`sent ([\d,]+) bytes\s+received ([\d,]+) bytes`) + + var count int + var total int64 + + if m := fileRe.FindStringSubmatch(output); len(m) > 1 { + count, _ = strconv.Atoi(m[1]) + } + if m := bytesRe.FindStringSubmatch(output); len(m) > 2 { + sent, _ := strconv.ParseInt(strings.ReplaceAll(m[1], ",", ""), 10, 64) + recv, _ := strconv.ParseInt(strings.ReplaceAll(m[2], ",", ""), 10, 64) + total = sent + recv + } + return count, total +} +``` + +**Step 4: Run tests** + +```bash +go test ./internal/syncer/ -v +``` +Expected: all PASS + +**Step 5: Commit** + +```bash +git add internal/syncer/ +git commit -m "feat: add syncer package with rsync command builder and SSH support" +``` + +--- + +### Task 4: Watcher Package + +**Files:** +- Create: `internal/watcher/watcher.go` +- Create: `internal/watcher/watcher_test.go` + +**Step 1: Write failing tests for debouncer** + +```go +// internal/watcher/watcher_test.go +package watcher + +import ( + "sync/atomic" + "testing" + "time" +) + +func TestDebouncerBatchesEvents(t *testing.T) { + var callCount atomic.Int32 + callback := func() { callCount.Add(1) } + + d := NewDebouncer(100*time.Millisecond, callback) + defer d.Stop() + + // Fire 5 events rapidly + for i := 0; i < 5; i++ { + d.Trigger() + time.Sleep(10 * time.Millisecond) + } + + // Wait for debounce window to expire + time.Sleep(200 * time.Millisecond) + + if count := callCount.Load(); count != 1 { + t.Errorf("expected 1 callback, got %d", count) + } +} + +func TestDebouncerSeparateEvents(t *testing.T) { + var callCount atomic.Int32 + callback := func() { callCount.Add(1) } + + d := NewDebouncer(50*time.Millisecond, callback) + defer d.Stop() + + d.Trigger() + time.Sleep(100 * time.Millisecond) // Wait for first debounce + + d.Trigger() + time.Sleep(100 * time.Millisecond) // Wait for second debounce + + if count := callCount.Load(); count != 2 { + t.Errorf("expected 2 callbacks, got %d", count) + } +} +``` + +**Step 2: Run 
tests to verify they fail**
+
+```bash
+go test ./internal/watcher/ -v
+```
+
+**Step 3: Implement watcher package**
+
+```go
+// internal/watcher/watcher.go
+package watcher
+
+import (
+	"log"
+	"os"
+	"path/filepath"
+	"sync"
+	"sync/atomic"
+	"time"
+
+	"github.com/fsnotify/fsnotify"
+)
+
+// Debouncer batches rapid events into a single callback.
+type Debouncer struct {
+	interval time.Duration
+	callback func()
+	timer    *time.Timer
+	mu       sync.Mutex
+	stopped  bool
+}
+
+// NewDebouncer creates a debouncer with the given interval.
+func NewDebouncer(interval time.Duration, callback func()) *Debouncer {
+	return &Debouncer{
+		interval: interval,
+		callback: callback,
+	}
+}
+
+// Trigger resets the debounce timer.
+func (d *Debouncer) Trigger() {
+	d.mu.Lock()
+	defer d.mu.Unlock()
+	if d.stopped {
+		return
+	}
+	if d.timer != nil {
+		d.timer.Stop()
+	}
+	d.timer = time.AfterFunc(d.interval, d.callback)
+}
+
+// Stop cancels any pending callback.
+func (d *Debouncer) Stop() {
+	d.mu.Lock()
+	defer d.mu.Unlock()
+	d.stopped = true
+	if d.timer != nil {
+		d.timer.Stop()
+	}
+}
+
+// EventHandler is called when files change.
+type EventHandler func()
+
+// Watcher monitors a directory for changes using fsnotify.
+type Watcher struct {
+	fsw       *fsnotify.Watcher
+	debouncer *Debouncer
+	path      string
+	ignores   []string
+	paused    atomic.Bool
+	done      chan struct{}
+}
+
+// New creates a file watcher for the given path.
+func New(path string, debounceMs int, ignores []string, handler EventHandler) (*Watcher, error) {
+	fsw, err := fsnotify.NewWatcher()
+	if err != nil {
+		return nil, err
+	}
+
+	interval := time.Duration(debounceMs) * time.Millisecond
+	if interval == 0 {
+		interval = 500 * time.Millisecond
+	}
+
+	w := &Watcher{
+		fsw:       fsw,
+		debouncer: NewDebouncer(interval, handler),
+		path:      path,
+		ignores:   ignores,
+		done:      make(chan struct{}),
+	}
+
+	return w, nil
+}
+
+// Start begins watching for file changes.
+func (w *Watcher) Start() error {
+	if err := w.addRecursive(w.path); err != nil {
+		return err
+	}
+
+	go w.loop()
+	return nil
+}
+
+// Stop ends the watcher.
+func (w *Watcher) Stop() {
+	w.debouncer.Stop()
+	w.fsw.Close()
+	<-w.done
+}
+
+// Pause suspends or resumes event handling without tearing down the watches.
+// An atomic.Bool is used because Pause is called from a different goroutine
+// than the event loop.
+func (w *Watcher) Pause(p bool) {
+	w.paused.Store(p)
+}
+
+func (w *Watcher) loop() {
+	defer close(w.done)
+	for {
+		select {
+		case event, ok := <-w.fsw.Events:
+			if !ok {
+				return
+			}
+			if w.paused.Load() {
+				continue
+			}
+			if w.shouldIgnore(event.Name) {
+				continue
+			}
+			if event.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0 {
+				// If a directory was created, watch it too
+				if event.Op&fsnotify.Create != 0 {
+					w.addRecursive(event.Name)
+				}
+				w.debouncer.Trigger()
+			}
+		case err, ok := <-w.fsw.Errors:
+			if !ok {
+				return
+			}
+			log.Printf("watcher error: %v", err)
+		}
+	}
+}
+
+func (w *Watcher) shouldIgnore(path string) bool {
+	base := filepath.Base(path)
+	for _, pattern := range w.ignores {
+		if matched, _ := filepath.Match(pattern, base); matched {
+			return true
+		}
+		if matched, _ := filepath.Match(pattern, path); matched {
+			return true
+		}
+	}
+	return false
+}
+
+func (w *Watcher) addRecursive(path string) error {
+	return filepath.Walk(path, func(p string, info os.FileInfo, err error) error {
+		if err != nil {
+			return nil // skip unreadable entries
+		}
+		// fsnotify watches are not recursive, so register every directory.
+		if !info.IsDir() {
+			return nil
+		}
+		return w.fsw.Add(p)
+	})
+}
+```
+ +**Step 4: Run tests** + +```bash +go test ./internal/watcher/ -v +``` +Expected: all PASS + +**Step 5: Commit** + +```bash +git add internal/watcher/ +git commit -m "feat: add watcher package with fsnotify and debouncing" +``` + +--- + +### Task 5: Logger Package + +**Files:** +- Create: `internal/logger/logger.go` +- Create: `internal/logger/logger_test.go` + +**Step 1: Write failing tests** + +```go +// internal/logger/logger_test.go +package logger + +import ( + "encoding/json" + "os" + "path/filepath" + "strings" + "testing" +) + +func TestJSONLogger(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + l, err := New(logPath, "json") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + defer l.Close() + + l.Info("synced", map[string]interface{}{ + "file": "main.go", + "size": 2150, + }) + + data, _ := os.ReadFile(logPath) + lines := strings.TrimSpace(string(data)) + + var entry map[string]interface{} + if err := json.Unmarshal([]byte(lines), &entry); err != nil { + t.Fatalf("invalid JSON: %v\nline: %s", err, lines) + } + if entry["level"] != "info" { + t.Errorf("expected level=info, got %v", entry["level"]) + } + if entry["event"] != "synced" { + t.Errorf("expected event=synced, got %v", entry["event"]) + } +} + +func TestTextLogger(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + l, err := New(logPath, "text") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + defer l.Close() + + l.Info("synced", map[string]interface{}{"file": "main.go"}) + + data, _ := os.ReadFile(logPath) + line := string(data) + if !strings.Contains(line, "INF") { + t.Errorf("expected INF in text log, got: %s", line) + } + if !strings.Contains(line, "synced") { + t.Errorf("expected 'synced' in text log, got: %s", line) + } +} +``` + +**Step 2: Run tests to verify they fail** + +```bash +go test ./internal/logger/ -v +``` + +**Step 3: Implement logger** + +```go +// internal/logger/logger.go 
+package logger + +import ( + "encoding/json" + "fmt" + "os" + "strings" + "sync" + "time" +) + +// Logger writes structured log entries to a file. +type Logger struct { + file *os.File + format string // "json" or "text" + mu sync.Mutex +} + +// New creates a logger writing to the given path. +func New(path string, format string) (*Logger, error) { + if format == "" { + format = "text" + } + f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644) + if err != nil { + return nil, err + } + return &Logger{file: f, format: format}, nil +} + +// Close closes the log file. +func (l *Logger) Close() { + if l.file != nil { + l.file.Close() + } +} + +// Info logs an info-level entry. +func (l *Logger) Info(event string, fields map[string]interface{}) { + l.log("info", event, fields) +} + +// Warn logs a warning-level entry. +func (l *Logger) Warn(event string, fields map[string]interface{}) { + l.log("warn", event, fields) +} + +// Error logs an error-level entry. +func (l *Logger) Error(event string, fields map[string]interface{}) { + l.log("error", event, fields) +} + +// Debug logs a debug-level entry. 
+func (l *Logger) Debug(event string, fields map[string]interface{}) {
+	l.log("debug", event, fields)
+}
+
+func (l *Logger) log(level, event string, fields map[string]interface{}) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	now := time.Now().Format("15:04:05")
+
+	if l.format == "json" {
+		entry := map[string]interface{}{
+			"time":  now,
+			"level": level,
+			"event": event,
+		}
+		for k, v := range fields {
+			entry[k] = v
+		}
+		data, _ := json.Marshal(entry)
+		fmt.Fprintln(l.file, string(data))
+	} else {
+		// Map levels to fixed three-letter tags; naive truncation would turn
+		// "warn" into "WAR", but the log view matches on "WRN".
+		tags := map[string]string{"info": "INF", "warn": "WRN", "error": "ERR", "debug": "DBG"}
+		tag, ok := tags[level]
+		if !ok {
+			tag = strings.ToUpper(level)
+		}
+		parts := []string{fmt.Sprintf("%s %s %s", now, tag, event)}
+		for k, v := range fields {
+			parts = append(parts, fmt.Sprintf("%s=%v", k, v))
+		}
+		fmt.Fprintln(l.file, strings.Join(parts, " "))
+	}
+}
+```
+
+**Step 4: Run tests**
+
+```bash
+go test ./internal/logger/ -v
+```
+Expected: all PASS
+
+**Step 5: Commit**
+
+```bash
+git add internal/logger/
+git commit -m "feat: add logger package with JSON and text output"
+```
+
+---
+
+### Task 6: TUI — Styles and Dashboard
+
+**Files:**
+- Create: `internal/tui/styles.go`
+- Create: `internal/tui/dashboard.go`
+- Create: `internal/tui/app.go`
+
+**Step 1: Create Lipgloss styles**
+
+```go
+// internal/tui/styles.go
+package tui
+
+import "github.com/charmbracelet/lipgloss"
+
+var (
+	titleStyle = lipgloss.NewStyle().
+		Bold(true).
+		Foreground(lipgloss.Color("12")) // blue
+
+	statusSynced = lipgloss.NewStyle().
+		Foreground(lipgloss.Color("10")) // green
+
+	statusSyncing = lipgloss.NewStyle().
+		Foreground(lipgloss.Color("11")) // yellow
+
+	statusError = lipgloss.NewStyle().
+		Foreground(lipgloss.Color("9")) // red
+
+	dimStyle = lipgloss.NewStyle().
+		Foreground(lipgloss.Color("8")) // dim gray
+
+	sectionStyle = lipgloss.NewStyle().
+		BorderStyle(lipgloss.NormalBorder()).
+		BorderBottom(true).
+		BorderForeground(lipgloss.Color("8"))
+
+	helpStyle = lipgloss.NewStyle().
+ Foreground(lipgloss.Color("8")) +) +``` + +**Step 2: Create dashboard Bubbletea model** + +```go +// internal/tui/dashboard.go +package tui + +import ( + "fmt" + "strings" + "time" + + tea "github.com/charmbracelet/bubbletea" +) + +// SyncEvent represents a file sync event for display. +type SyncEvent struct { + File string + Size string + Duration time.Duration + Status string // "synced", "syncing", "error" + Time time.Time +} + +// DashboardModel is the main TUI view. +type DashboardModel struct { + local string + remote string + status string // "watching", "syncing", "paused", "error" + lastSync time.Time + events []SyncEvent + totalSynced int + totalBytes string + totalErrors int + width int + height int + filter string + filtering bool +} + +// NewDashboard creates the dashboard model. +func NewDashboard(local, remote string) DashboardModel { + return DashboardModel{ + local: local, + remote: remote, + status: "watching", + events: []SyncEvent{}, + } +} + +func (m DashboardModel) Init() tea.Cmd { + return tickCmd() +} + +// tickMsg triggers periodic UI refresh. +type tickMsg time.Time + +func tickCmd() tea.Cmd { + return tea.Tick(time.Second, func(t time.Time) tea.Msg { + return tickMsg(t) + }) +} + +// SyncEventMsg delivers a sync event to the TUI. +type SyncEventMsg SyncEvent + +// SyncStatsMsg updates aggregate stats. 
+type SyncStatsMsg struct { + TotalSynced int + TotalBytes string + TotalErrors int +} + +func (m DashboardModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { + switch msg := msg.(type) { + case tea.KeyMsg: + if m.filtering { + switch msg.String() { + case "enter", "esc": + m.filtering = false + if msg.String() == "esc" { + m.filter = "" + } + return m, nil + case "backspace": + if len(m.filter) > 0 { + m.filter = m.filter[:len(m.filter)-1] + } + return m, nil + default: + if len(msg.String()) == 1 { + m.filter += msg.String() + } + return m, nil + } + } + + switch msg.String() { + case "q", "ctrl+c": + return m, tea.Quit + case "p": + if m.status == "paused" { + m.status = "watching" + } else if m.status == "watching" { + m.status = "paused" + } + return m, nil + case "/": + m.filtering = true + m.filter = "" + return m, nil + } + + case tea.WindowSizeMsg: + m.width = msg.Width + m.height = msg.Height + + case tickMsg: + return m, tickCmd() + + case SyncEventMsg: + e := SyncEvent(msg) + m.events = append([]SyncEvent{e}, m.events...) + if len(m.events) > 100 { + m.events = m.events[:100] + } + if e.Status == "synced" { + m.lastSync = e.Time + } + + case SyncStatsMsg: + m.totalSynced = msg.TotalSynced + m.totalBytes = msg.TotalBytes + m.totalErrors = msg.TotalErrors + } + + return m, nil +} + +func (m DashboardModel) View() string { + var b strings.Builder + + // Header + title := titleStyle.Render(" esync ") + separator := dimStyle.Render(strings.Repeat("─", max(0, m.width-8))) + b.WriteString(title + separator + "\n") + b.WriteString(fmt.Sprintf(" %s → %s\n", m.local, m.remote)) + + // Status + var statusStr string + switch m.status { + case "watching": + ago := "" + if !m.lastSync.IsZero() { + ago = fmt.Sprintf(" (synced %s ago)", time.Since(m.lastSync).Round(time.Second)) + } + statusStr = statusSynced.Render("●") + " Watching" + dimStyle.Render(ago) + case "syncing": + statusStr = statusSyncing.Render("⟳") + " Syncing..." 
+ case "paused": + statusStr = dimStyle.Render("⏸") + " Paused" + case "error": + statusStr = statusError.Render("✗") + " Error" + } + b.WriteString(" " + statusStr + "\n\n") + + // Recent events + b.WriteString(" " + dimStyle.Render("Recent "+strings.Repeat("─", max(0, m.width-12))) + "\n") + filtered := m.filteredEvents() + shown := min(10, len(filtered)) + for i := 0; i < shown; i++ { + e := filtered[i] + var icon string + switch e.Status { + case "synced": + icon = statusSynced.Render("✓") + case "syncing": + icon = statusSyncing.Render("⟳") + case "error": + icon = statusError.Render("✗") + } + dur := "" + if e.Duration > 0 { + dur = dimStyle.Render(fmt.Sprintf("%.1fs", e.Duration.Seconds())) + } + b.WriteString(fmt.Sprintf(" %s %-30s %8s %s\n", icon, e.File, e.Size, dur)) + } + b.WriteString("\n") + + // Stats + b.WriteString(" " + dimStyle.Render("Stats "+strings.Repeat("─", max(0, m.width-10))) + "\n") + stats := fmt.Sprintf(" %d synced │ %s total │ %d errors", + m.totalSynced, m.totalBytes, m.totalErrors) + b.WriteString(dimStyle.Render(stats) + "\n\n") + + // Help bar + help := " q quit p pause r full resync l logs d dry-run / filter" + if m.filtering { + help = fmt.Sprintf(" filter: %s█ (enter to apply, esc to cancel)", m.filter) + } + b.WriteString(helpStyle.Render(help) + "\n") + + return b.String() +} + +func (m DashboardModel) filteredEvents() []SyncEvent { + if m.filter == "" { + return m.events + } + var filtered []SyncEvent + for _, e := range m.events { + if strings.Contains(strings.ToLower(e.File), strings.ToLower(m.filter)) { + filtered = append(filtered, e) + } + } + return filtered +} + +func max(a, b int) int { + if a > b { + return a + } + return b +} + +func min(a, b int) int { + if a < b { + return a + } + return b +} +``` + +**Step 3: Create log view model** + +```go +// internal/tui/logview.go +package tui + +import ( + "fmt" + "strings" + "time" + + tea "github.com/charmbracelet/bubbletea" +) + +// LogEntry is a single log line. 
+type LogEntry struct { + Time time.Time + Level string // "INF", "WRN", "ERR" + Message string +} + +// LogViewModel shows scrollable logs. +type LogViewModel struct { + entries []LogEntry + offset int + width int + height int + filter string + filtering bool +} + +// NewLogView creates an empty log view. +func NewLogView() LogViewModel { + return LogViewModel{} +} + +func (m LogViewModel) Init() tea.Cmd { return nil } + +func (m LogViewModel) Update(msg tea.Msg) (LogViewModel, tea.Cmd) { + switch msg := msg.(type) { + case tea.KeyMsg: + if m.filtering { + switch msg.String() { + case "enter", "esc": + m.filtering = false + if msg.String() == "esc" { + m.filter = "" + } + case "backspace": + if len(m.filter) > 0 { + m.filter = m.filter[:len(m.filter)-1] + } + default: + if len(msg.String()) == 1 { + m.filter += msg.String() + } + } + return m, nil + } + + switch msg.String() { + case "up", "k": + if m.offset > 0 { + m.offset-- + } + case "down", "j": + m.offset++ + case "/": + m.filtering = true + m.filter = "" + } + + case tea.WindowSizeMsg: + m.width = msg.Width + m.height = msg.Height + } + + return m, nil +} + +func (m LogViewModel) View() string { + var b strings.Builder + + title := titleStyle.Render(" esync ─ logs ") + separator := dimStyle.Render(strings.Repeat("─", max(0, m.width-16))) + b.WriteString(title + separator + "\n") + + filtered := m.filteredEntries() + visible := m.height - 4 // header + help + if visible < 1 { + visible = 10 + } + + start := m.offset + if start > len(filtered)-visible { + start = max(0, len(filtered)-visible) + } + end := min(start+visible, len(filtered)) + + for i := start; i < end; i++ { + e := filtered[i] + ts := dimStyle.Render(e.Time.Format("15:04:05")) + var lvl string + switch e.Level { + case "INF": + lvl = statusSynced.Render("INF") + case "WRN": + lvl = statusSyncing.Render("WRN") + case "ERR": + lvl = statusError.Render("ERR") + default: + lvl = dimStyle.Render(e.Level) + } + b.WriteString(fmt.Sprintf(" %s %s 
%s\n", ts, lvl, e.Message)) + } + + b.WriteString("\n") + help := " ↑↓ scroll / filter l back q quit" + if m.filtering { + help = fmt.Sprintf(" filter: %s█ (enter to apply, esc to cancel)", m.filter) + } + b.WriteString(helpStyle.Render(help) + "\n") + + return b.String() +} + +func (m LogViewModel) filteredEntries() []LogEntry { + if m.filter == "" { + return m.entries + } + var out []LogEntry + for _, e := range m.entries { + if strings.Contains(strings.ToLower(e.Message), strings.ToLower(m.filter)) { + out = append(out, e) + } + } + return out +} + +// AddEntry adds a log entry (called from outside the TUI update loop via a Cmd). +func (m *LogViewModel) AddEntry(entry LogEntry) { + m.entries = append(m.entries, entry) +} +``` + +**Step 4: Create app model (root TUI that switches between dashboard and log view)** + +```go +// internal/tui/app.go +package tui + +import ( + tea "github.com/charmbracelet/bubbletea" +) + +type view int + +const ( + viewDashboard view = iota + viewLogs +) + +// AppModel is the root Bubbletea model. +type AppModel struct { + dashboard DashboardModel + logView LogViewModel + current view + // Channels for external events + syncEvents chan SyncEvent + logEntries chan LogEntry +} + +// NewApp creates the root TUI model. +func NewApp(local, remote string) *AppModel { + return &AppModel{ + dashboard: NewDashboard(local, remote), + logView: NewLogView(), + current: viewDashboard, + syncEvents: make(chan SyncEvent, 100), + logEntries: make(chan LogEntry, 100), + } +} + +// SyncEventChan returns the channel to send sync events to the TUI. +func (m *AppModel) SyncEventChan() chan<- SyncEvent { + return m.syncEvents +} + +// LogEntryChan returns the channel to send log entries to the TUI. 
+func (m *AppModel) LogEntryChan() chan<- LogEntry {
+	return m.logEntries
+}
+
+func (m *AppModel) Init() tea.Cmd {
+	return tea.Batch(
+		m.dashboard.Init(),
+		m.waitForSyncEvent(),
+		m.waitForLogEntry(),
+	)
+}
+
+func (m *AppModel) waitForSyncEvent() tea.Cmd {
+	return func() tea.Msg {
+		e := <-m.syncEvents
+		return SyncEventMsg(e)
+	}
+}
+
+func (m *AppModel) waitForLogEntry() tea.Cmd {
+	return func() tea.Msg {
+		return <-m.logEntries
+	}
+}
+
+func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
+	switch msg := msg.(type) {
+	case tea.KeyMsg:
+		// While a filter prompt is active, the focused view gets every key,
+		// so typing "l" or "q" into a filter neither switches views nor quits.
+		if !m.dashboard.filtering && !m.logView.filtering {
+			switch msg.String() {
+			case "l":
+				if m.current == viewDashboard {
+					m.current = viewLogs
+				} else {
+					m.current = viewDashboard
+				}
+				return m, nil
+			case "q", "ctrl+c":
+				return m, tea.Quit
+			}
+		}
+
+	case tea.WindowSizeMsg:
+		// Both views track the window size.
+		var model tea.Model
+		model, _ = m.dashboard.Update(msg)
+		m.dashboard = model.(DashboardModel)
+		m.logView, _ = m.logView.Update(msg)
+		return m, nil
+
+	case tickMsg:
+		// Always route ticks to the dashboard so the tick chain keeps
+		// re-arming while the log view is in the foreground.
+		model, cmd := m.dashboard.Update(msg)
+		m.dashboard = model.(DashboardModel)
+		return m, cmd
+
+	case SyncEventMsg:
+		var cmd tea.Cmd
+		var model tea.Model
+		model, cmd = m.dashboard.Update(msg)
+		m.dashboard = model.(DashboardModel)
+		return m, tea.Batch(cmd, m.waitForSyncEvent())
+
+	case LogEntry:
+		m.logView.AddEntry(msg)
+		return m, m.waitForLogEntry()
+	}
+
+	// Delegate to current view
+	switch m.current {
+	case viewDashboard:
+		var cmd tea.Cmd
+		var model tea.Model
+		model, cmd = m.dashboard.Update(msg)
+		m.dashboard = model.(DashboardModel)
+		return m, cmd
+	case viewLogs:
+		var cmd tea.Cmd
+		m.logView, cmd = m.logView.Update(msg)
+		return m, cmd
+	}
+
+	return m, nil
+}
+
+func (m *AppModel) View() string {
+	switch m.current {
+	case viewLogs:
+		return m.logView.View()
+	default:
+		return m.dashboard.View()
+	}
+}
+```
+
+**Step 5: Verify build**
+
+```bash
+go build ./...
+``` + +**Step 6: Commit** + +```bash +git add internal/tui/ +git commit -m "feat: add TUI with dashboard, log view, and Lipgloss styles" +``` + +--- + +### Task 7: CLI Commands — sync + +**Files:** +- Create: `cmd/sync.go` +- Modify: `cmd/root.go` + +**Step 1: Implement sync command** + +```go +// cmd/sync.go +package cmd + +import ( + "fmt" + "os" + "time" + + tea "github.com/charmbracelet/bubbletea" + "github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" + "github.com/eloualiche/esync/internal/logger" + "github.com/eloualiche/esync/internal/syncer" + "github.com/eloualiche/esync/internal/tui" + "github.com/eloualiche/esync/internal/watcher" +) + +var ( + localPath string + remotePath string + daemon bool + dryRun bool + initialSync bool + verbose bool +) + +var syncCmd = &cobra.Command{ + Use: "sync", + Short: "Start watching and syncing files", + Long: "Watch a local directory for changes and sync them to a remote destination using rsync.", + RunE: runSync, +} + +func init() { + syncCmd.Flags().StringVarP(&localPath, "local", "l", "", "local path to sync from") + syncCmd.Flags().StringVarP(&remotePath, "remote", "r", "", "remote path to sync to") + syncCmd.Flags().BoolVar(&daemon, "daemon", false, "run without TUI, log to file") + syncCmd.Flags().BoolVar(&dryRun, "dry-run", false, "show what would sync without executing") + syncCmd.Flags().BoolVar(&initialSync, "initial-sync", false, "force full sync on startup") + syncCmd.Flags().BoolVarP(&verbose, "verbose", "v", false, "verbose output") + + rootCmd.AddCommand(syncCmd) +} + +func runSync(cmd *cobra.Command, args []string) error { + cfg, err := loadOrBuildConfig() + if err != nil { + return err + } + + // CLI overrides + if localPath != "" { + cfg.Sync.Local = localPath + } + if remotePath != "" { + cfg.Sync.Remote = remotePath + } + if initialSync { + cfg.Settings.InitialSync = true + } + + if cfg.Sync.Local == "" || cfg.Sync.Remote == "" { + return fmt.Errorf("both local and remote paths 
are required (use -l and -r, or a config file)") + } + + s := syncer.New(cfg) + s.DryRun = dryRun + + // Optional initial sync + if cfg.Settings.InitialSync { + fmt.Println("Running initial sync...") + if result, err := s.Run(); err != nil { + fmt.Fprintf(os.Stderr, "initial sync failed: %s\n", result.ErrorMessage) + } + } + + if daemon { + return runDaemon(cfg, s) + } + return runTUI(cfg, s) +} + +func runTUI(cfg *config.Config, s *syncer.Syncer) error { + app := tui.NewApp(cfg.Sync.Local, cfg.Sync.Remote) + + // Set up watcher + handler := func() { + result, err := s.Run() + event := tui.SyncEvent{ + Time: time.Now(), + } + if err != nil { + event.Status = "error" + event.File = result.ErrorMessage + } else { + event.Status = "synced" + event.Duration = result.Duration + if len(result.Files) > 0 { + event.File = result.Files[0] + } else { + event.File = "(no changes)" + } + event.Size = formatSize(result.BytesTotal) + } + app.SyncEventChan() <- event + } + + w, err := watcher.New( + cfg.Sync.Local, + cfg.Settings.WatcherDebounce, + cfg.AllIgnorePatterns(), + handler, + ) + if err != nil { + return fmt.Errorf("creating watcher: %w", err) + } + + if err := w.Start(); err != nil { + return fmt.Errorf("starting watcher: %w", err) + } + defer w.Stop() + + p := tea.NewProgram(app, tea.WithAltScreen()) + if _, err := p.Run(); err != nil { + return err + } + return nil +} + +func runDaemon(cfg *config.Config, s *syncer.Syncer) error { + logPath := cfg.Settings.Log.File + if logPath == "" { + logPath = "esync.log" + } + logFormat := cfg.Settings.Log.Format + + l, err := logger.New(logPath, logFormat) + if err != nil { + return fmt.Errorf("creating logger: %w", err) + } + defer l.Close() + + fmt.Printf("esync daemon started (PID %d)\n", os.Getpid()) + fmt.Printf("Watching: %s → %s\n", cfg.Sync.Local, cfg.Sync.Remote) + fmt.Printf("Log: %s\n", logPath) + + l.Info("started", map[string]interface{}{ + "local": cfg.Sync.Local, + "remote": cfg.Sync.Remote, + "pid": os.Getpid(), 
+ }) + + handler := func() { + result, err := s.Run() + if err != nil { + l.Error("sync_failed", map[string]interface{}{ + "error": result.ErrorMessage, + }) + fmt.Print("\a") // terminal bell on error + } else { + fields := map[string]interface{}{ + "duration_ms": result.Duration.Milliseconds(), + "bytes": result.BytesTotal, + } + if len(result.Files) > 0 { + fields["file"] = result.Files[0] + } + l.Info("synced", fields) + } + } + + w, err := watcher.New( + cfg.Sync.Local, + cfg.Settings.WatcherDebounce, + cfg.AllIgnorePatterns(), + handler, + ) + if err != nil { + return fmt.Errorf("creating watcher: %w", err) + } + + if err := w.Start(); err != nil { + return fmt.Errorf("starting watcher: %w", err) + } + defer w.Stop() + + // Block until interrupted + select {} +} + +func loadOrBuildConfig() (*config.Config, error) { + if cfgFile != "" { + return config.Load(cfgFile) + } + + // Quick mode: local + remote provided directly + if localPath != "" && remotePath != "" { + return &config.Config{ + Sync: config.SyncSection{ + Local: localPath, + Remote: remotePath, + }, + Settings: config.Settings{ + WatcherDebounce: 500, + Rsync: config.RsyncSettings{ + Archive: true, + Compress: true, + Progress: true, + }, + }, + }, nil + } + + // Try to find config file + path := config.FindConfigFile() + if path == "" { + return nil, fmt.Errorf("no config file found; use -c, create esync.toml, or pass -l and -r") + } + return config.Load(path) +} + +func formatSize(bytes int64) string { + switch { + case bytes < 1024: + return fmt.Sprintf("%dB", bytes) + case bytes < 1024*1024: + return fmt.Sprintf("%.1fKB", float64(bytes)/1024) + case bytes < 1024*1024*1024: + return fmt.Sprintf("%.1fMB", float64(bytes)/(1024*1024)) + default: + return fmt.Sprintf("%.2fGB", float64(bytes)/(1024*1024*1024)) + } +} +``` + +**Step 2: Verify build** + +```bash +go build ./... 
+```
+
+**Step 3: Manual test**
+
+```bash
+mkdir -p /tmp/esync-test-src /tmp/esync-test-dst
+echo "hello" > /tmp/esync-test-src/test.txt
+go run . sync -l /tmp/esync-test-src -r /tmp/esync-test-dst
+# TUI should appear. Modify test.txt in another terminal. Press q to quit.
+```
+
+**Step 4: Commit**
+
+```bash
+git add cmd/sync.go
+git commit -m "feat: add sync command with TUI and daemon modes"
+```
+
+---
+
+### Task 8: CLI Commands — init (smart)
+
+**Files:**
+- Create: `cmd/init.go`
+
+**Step 1: Implement smart init**
+
+```go
+// cmd/init.go
+package cmd
+
+import (
+	"bufio"
+	"fmt"
+	"os"
+	"strings"
+
+	"github.com/spf13/cobra"
+
+	"github.com/eloualiche/esync/internal/config"
+)
+
+var initRemote string
+
+var initCmd = &cobra.Command{
+	Use:   "init",
+	Short: "Generate esync.toml from current directory",
+	Long:  "Create an esync.toml config file by inspecting the current directory, importing .gitignore patterns, and detecting common exclusions.",
+	RunE:  runInit,
+}
+
+func init() {
+	initCmd.Flags().StringVarP(&initRemote, "remote", "r", "", "pre-fill remote destination")
+	rootCmd.AddCommand(initCmd)
+}
+
+func runInit(cmd *cobra.Command, args []string) error {
+	outPath := "esync.toml"
+	if cfgFile != "" {
+		outPath = cfgFile
+	}
+
+	// Check if file exists
+	if _, err := os.Stat(outPath); err == nil {
+		fmt.Printf("Config file %s already exists. Overwrite? [y/N] ", outPath)
+		reader := bufio.NewReader(os.Stdin)
+		answer, _ := reader.ReadString('\n')
+		answer = strings.TrimSpace(strings.ToLower(answer))
+		if answer != "y" && answer != "yes" {
+			fmt.Println("Aborted.")
+			return nil
+		}
+	}
+
+	// Start with default TOML
+	content := config.DefaultTOML()
+
+	// Detect .gitignore
+	gitignorePatterns := readGitignore()
+	if len(gitignorePatterns) > 0 {
+		fmt.Printf("Detected .gitignore — imported %d patterns\n", len(gitignorePatterns))
+	}
+
+	// Detect common directories to exclude
+	autoExclude := detectCommonDirs()
+	if len(autoExclude) > 0 {
+		fmt.Printf("Auto-excluding: %s\n", strings.Join(autoExclude, ", "))
+	}
+
+	// Prompt for remote if not provided
+	remote := initRemote
+	if remote == "" {
+		fmt.Print("Remote destination? (e.g. user@host:/path) ")
+		reader := bufio.NewReader(os.Stdin)
+		remote, _ = reader.ReadString('\n')
+		remote = strings.TrimSpace(remote)
+	}
+	if remote != "" {
+		content = strings.Replace(content, `remote = "./remote"`, fmt.Sprintf(`remote = "%s"`, remote), 1)
+	}
+
+	// Merge extra ignore patterns into rsync ignore
+	if len(gitignorePatterns) > 0 || len(autoExclude) > 0 {
+		allExtra := append(gitignorePatterns, autoExclude...)
+ // Build the ignore array string + var quoted []string + for _, p := range allExtra { + quoted = append(quoted, fmt.Sprintf(`"%s"`, p)) + } + extraLine := strings.Join(quoted, ", ") + // Append to existing ignore array + content = strings.Replace(content, + `ignore = [".git/", "node_modules/", "**/__pycache__/"]`, + fmt.Sprintf(`ignore = [".git/", "node_modules/", "**/__pycache__/", %s]`, extraLine), + 1, + ) + } + + if err := os.WriteFile(outPath, []byte(content), 0644); err != nil { + return fmt.Errorf("writing config: %w", err) + } + + fmt.Printf("\nWritten: %s\n", outPath) + fmt.Println("\nRun `esync check` for file preview, `esync edit` to adjust") + + return nil +} + +func readGitignore() []string { + f, err := os.Open(".gitignore") + if err != nil { + return nil + } + defer f.Close() + + var patterns []string + scanner := bufio.NewScanner(f) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" || strings.HasPrefix(line, "#") { + continue + } + // Skip patterns we already have as defaults + if line == ".git" || line == ".git/" || line == "node_modules" || line == "node_modules/" || line == "__pycache__" || line == "__pycache__/" { + continue + } + patterns = append(patterns, line) + } + return patterns +} + +func detectCommonDirs() []string { + common := []string{".git/", "node_modules/", "__pycache__/", "build/", ".venv/", "dist/", ".tox/", ".mypy_cache/"} + var found []string + for _, dir := range common { + clean := strings.TrimSuffix(dir, "/") + if info, err := os.Stat(clean); err == nil && info.IsDir() { + // Skip ones already in default config + if dir == ".git/" || dir == "node_modules/" || dir == "__pycache__/" { + continue + } + found = append(found, dir) + } + } + return found +} +``` + +**Step 2: Verify build and test manually** + +```bash +go build ./... 
+cd /tmp && mkdir test-init && cd test-init +echo "*.pyc" > .gitignore +/path/to/esync init -r user@host:/deploy +cat esync.toml +``` + +**Step 3: Commit** + +```bash +git add cmd/init.go +git commit -m "feat: add smart init command with .gitignore import" +``` + +--- + +### Task 9: CLI Commands — check and edit + +**Files:** +- Create: `cmd/check.go` +- Create: `cmd/edit.go` + +**Step 1: Implement check command** + +```go +// cmd/check.go +package cmd + +import ( + "fmt" + "os" + "path/filepath" + "strings" + + "github.com/charmbracelet/lipgloss" + "github.com/spf13/cobra" + + "github.com/eloualiche/esync/internal/config" +) + +var checkCmd = &cobra.Command{ + Use: "check", + Short: "Validate config and show file include/exclude preview", + RunE: runCheck, +} + +func init() { + rootCmd.AddCommand(checkCmd) +} + +func runCheck(cmd *cobra.Command, args []string) error { + cfg, err := loadConfig() + if err != nil { + return err + } + return printPreview(cfg) +} + +func loadConfig() (*config.Config, error) { + path := cfgFile + if path == "" { + path = config.FindConfigFile() + } + if path == "" { + return nil, fmt.Errorf("no config file found") + } + return config.Load(path) +} + +func printPreview(cfg *config.Config) error { + green := lipgloss.NewStyle().Foreground(lipgloss.Color("10")) + yellow := lipgloss.NewStyle().Foreground(lipgloss.Color("11")) + dim := lipgloss.NewStyle().Foreground(lipgloss.Color("8")) + + fmt.Println(green.Render(" esync ─ config preview")) + fmt.Printf(" Local: %s\n", cfg.Sync.Local) + fmt.Printf(" Remote: %s\n\n", cfg.Sync.Remote) + + ignores := cfg.AllIgnorePatterns() + + var included []string + var excluded []excludedFile + var totalSize int64 + + localPath := cfg.Sync.Local + filepath.Walk(localPath, func(path string, info os.FileInfo, err error) error { + if err != nil { + return nil + } + rel, _ := filepath.Rel(localPath, path) + if rel == "." 
{ + return nil + } + + for _, pattern := range ignores { + clean := strings.Trim(pattern, "\"[]'") + if strings.HasPrefix(clean, "**/") { + clean = clean[3:] + } + base := filepath.Base(rel) + if matched, _ := filepath.Match(clean, base); matched { + excluded = append(excluded, excludedFile{path: rel, rule: pattern}) + if info.IsDir() { + return filepath.SkipDir + } + return nil + } + if matched, _ := filepath.Match(clean, rel); matched { + excluded = append(excluded, excludedFile{path: rel, rule: pattern}) + if info.IsDir() { + return filepath.SkipDir + } + return nil + } + // Directory pattern matching + if strings.HasSuffix(clean, "/") && info.IsDir() { + dirName := strings.TrimSuffix(clean, "/") + if base == dirName { + excluded = append(excluded, excludedFile{path: rel + "/", rule: pattern}) + return filepath.SkipDir + } + } + } + + if !info.IsDir() { + included = append(included, rel) + totalSize += info.Size() + } + return nil + }) + + // Show included + fmt.Println(green.Render(" Included (sample):")) + shown := min(10, len(included)) + for i := 0; i < shown; i++ { + fmt.Printf(" %s\n", included[i]) + } + if len(included) > shown { + fmt.Printf(" %s\n", dim.Render(fmt.Sprintf("... %d more files", len(included)-shown))) + } + fmt.Println() + + // Show excluded + fmt.Println(yellow.Render(" Excluded by rules:")) + shown = min(10, len(excluded)) + for i := 0; i < shown; i++ { + fmt.Printf(" %-30s %s\n", excluded[i].path, dim.Render("["+excluded[i].rule+"]")) + } + if len(excluded) > shown { + fmt.Printf(" %s\n", dim.Render(fmt.Sprintf("... 
%d more", len(excluded)-shown)))
+	}
+	fmt.Println()
+
+	fmt.Printf(" %s\n", dim.Render(fmt.Sprintf("%d files included (%s) │ %d excluded",
+		len(included), formatSize(totalSize), len(excluded))))
+
+	return nil
+}
+
+type excludedFile struct {
+	path string
+	rule string
+}
+
+// formatSize renders a byte count in human-readable IEC units.
+// Assumption: no shared helper exists yet; if Task 7's cmd/sync.go already
+// defines formatSize, reuse that definition instead of duplicating it here.
+func formatSize(n int64) string {
+	const unit = 1024
+	if n < unit {
+		return fmt.Sprintf("%d B", n)
+	}
+	div, exp := int64(unit), 0
+	for m := n / unit; m >= unit; m /= unit {
+		div *= unit
+		exp++
+	}
+	return fmt.Sprintf("%.1f %cB", float64(n)/float64(div), "KMGTPE"[exp])
+}
+```
+
+**Step 2: Implement edit command**
+
+```go
+// cmd/edit.go
+package cmd
+
+import (
+	"fmt"
+	"os"
+	"os/exec"
+
+	"github.com/spf13/cobra"
+
+	"github.com/eloualiche/esync/internal/config"
+)
+
+var editCmd = &cobra.Command{
+	Use:   "edit",
+	Short: "Open config in $EDITOR, then show preview",
+	RunE:  runEdit,
+}
+
+func init() {
+	rootCmd.AddCommand(editCmd)
+}
+
+func runEdit(cmd *cobra.Command, args []string) error {
+	path := cfgFile
+	if path == "" {
+		path = config.FindConfigFile()
+	}
+	if path == "" {
+		return fmt.Errorf("no config file found; run `esync init` first")
+	}
+
+	editor := os.Getenv("EDITOR")
+	if editor == "" {
+		editor = "vi"
+	}
+
+	for {
+		// Open editor
+		c := exec.Command(editor, path)
+		c.Stdin = os.Stdin
+		c.Stdout = os.Stdout
+		c.Stderr = os.Stderr
+		if err := c.Run(); err != nil {
+			return fmt.Errorf("editor failed: %w", err)
+		}
+
+		// Validate and show preview
+		cfg, err := config.Load(path)
+		if err != nil {
+			fmt.Printf("\nConfig error: %v\n", err)
+			fmt.Print("Press enter to edit again, or q to cancel: ")
+			var answer string
+			fmt.Scanln(&answer)
+			if answer == "q" {
+				return nil
+			}
+			continue
+		}
+
+		if err := printPreview(cfg); err != nil {
+			return err
+		}
+
+		fmt.Print("\nPress enter to accept, e to edit again, q to cancel: ")
+		var answer string
+		fmt.Scanln(&answer)
+		switch answer {
+		case "e":
+			continue
+		case "q":
+			fmt.Println("Cancelled.")
+			return nil
+		default:
+			fmt.Println("Config accepted.")
+			return nil
+		}
+	}
+}
+```
+
+**Step 3: Verify build**
+
+```bash
+go build ./... 
+```
+
+**Step 4: Commit**
+
+```bash
+git add cmd/check.go cmd/edit.go
+git commit -m "feat: add check and edit commands for config validation and preview"
+```
+
+---
+
+### Task 10: CLI Commands — status
+
+**Files:**
+- Create: `cmd/status.go`
+- Modify: `cmd/sync.go` (write PID file in daemon mode)
+
+**Step 1: Implement PID file in daemon mode**
+
+Add to `runDaemon` in `cmd/sync.go`:
+```go
+// Write PID file; surface failures through runDaemon's error return (RunE)
+pidPath := filepath.Join(os.TempDir(), "esync.pid")
+if err := os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", os.Getpid())), 0644); err != nil {
+	return fmt.Errorf("write PID file: %w", err)
+}
+defer os.Remove(pidPath)
+```
+
+**Step 2: Implement status command**
+
+```go
+// cmd/status.go
+package cmd
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"syscall"
+
+	"github.com/spf13/cobra"
+)
+
+var statusCmd = &cobra.Command{
+	Use:   "status",
+	Short: "Check if esync daemon is running",
+	RunE:  runStatus,
+}
+
+func init() {
+	rootCmd.AddCommand(statusCmd)
+}
+
+func runStatus(cmd *cobra.Command, args []string) error {
+	pidPath := filepath.Join(os.TempDir(), "esync.pid")
+	data, err := os.ReadFile(pidPath)
+	if err != nil {
+		fmt.Println("No esync daemon running.")
+		return nil
+	}
+
+	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
+	if err != nil {
+		fmt.Println("No esync daemon running (invalid PID file).")
+		os.Remove(pidPath)
+		return nil
+	}
+
+	// Check if process is alive
+	process, err := os.FindProcess(pid)
+	if err != nil {
+		fmt.Println("No esync daemon running.")
+		os.Remove(pidPath)
+		return nil
+	}
+
+	// On Unix, FindProcess always succeeds. Send signal 0 to check.
+	if err := process.Signal(syscall.Signal(0)); err != nil {
+		fmt.Println("No esync daemon running (stale PID file).")
+		os.Remove(pidPath)
+		return nil
+	}
+
+	fmt.Printf("esync daemon running (PID %d)\n", pid)
+	return nil
+}
+```
+
+**Step 3: Verify build**
+
+```bash
+go build ./... 
+``` + +**Step 4: Commit** + +```bash +git add cmd/status.go cmd/sync.go +git commit -m "feat: add status command and PID file for daemon mode" +``` + +--- + +### Task 11: Signal Handling and Graceful Shutdown + +**Files:** +- Modify: `cmd/sync.go` + +**Step 1: Add signal handling to daemon mode** + +Replace the `select {}` block at the end of `runDaemon` with: + +```go +sigCh := make(chan os.Signal, 1) +signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM) +<-sigCh +l.Info("stopping", nil) +fmt.Println("\nesync daemon stopped.") +``` + +And import `"os/signal"` and `"syscall"`. + +**Step 2: Test daemon start/stop** + +```bash +go run . sync --daemon -l /tmp/esync-test-src -r /tmp/esync-test-dst & +go run . status +kill %1 +``` + +**Step 3: Commit** + +```bash +git add cmd/sync.go +git commit -m "feat: add graceful shutdown with signal handling" +``` + +--- + +### Task 12: README + +**Files:** +- Modify: `readme.md` + +**Step 1: Write comprehensive README with TOML examples** + +Replace the entire README with documentation covering: + +- What esync does (1 paragraph) +- Installation (go install + binary download) +- Quick start (3 commands) +- Commands reference (sync, init, check, edit, status) +- Configuration reference with full annotated TOML example +- Config file search order +- SSH setup example +- Daemon mode usage +- TUI keyboard shortcuts +- Examples section with 5-6 common use cases + +Ensure thorough TOML examples per user request. 
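+
+A hedged sketch of what the README's annotated config example could look like (key names come from the Task 2 config schema and `esync.toml.example`; the values shown are illustrative, not prescribed defaults):
+
+```toml
+# Minimal esync.toml for the README quick-start (illustrative values)
+[sync]
+local  = "."                       # directory to watch and sync FROM
+remote = "user@host:/deploy"       # scp-style remote, or a plain local path
+
+[settings]
+watcher_debounce = 500             # ms; coalesce rapid events into one sync
+ignore = [".git", "node_modules"]  # excluded from watching AND rsync
+
+[settings.rsync]
+archive  = true                    # rsync -a
+compress = true                    # rsync -z
+```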
+ +**Step 2: Commit** + +```bash +git add readme.md +git commit -m "docs: rewrite README for Go version with TOML examples" +``` + +--- + +### Task 13: Integration Testing + +**Files:** +- Create: `integration_test.go` + +**Step 1: Write integration test for local sync** + +```go +// integration_test.go +package main + +import ( + "os" + "path/filepath" + "testing" + "time" + + "github.com/eloualiche/esync/internal/config" + "github.com/eloualiche/esync/internal/syncer" + "github.com/eloualiche/esync/internal/watcher" +) + +func TestLocalSyncIntegration(t *testing.T) { + src := t.TempDir() + dst := t.TempDir() + + // Create a test file + os.WriteFile(filepath.Join(src, "hello.txt"), []byte("hello"), 0644) + + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: src, + Remote: dst, + }, + Settings: config.Settings{ + WatcherDebounce: 100, + Rsync: config.RsyncSettings{ + Archive: true, + Progress: true, + }, + }, + } + + s := syncer.New(cfg) + result, err := s.Run() + if err != nil { + t.Fatalf("sync failed: %v", err) + } + if !result.Success { + t.Fatalf("sync not successful: %s", result.ErrorMessage) + } + + // Verify file was synced + data, err := os.ReadFile(filepath.Join(dst, "hello.txt")) + if err != nil { + t.Fatalf("synced file not found: %v", err) + } + if string(data) != "hello" { + t.Errorf("expected 'hello', got %q", string(data)) + } +} + +func TestWatcherTriggersSync(t *testing.T) { + src := t.TempDir() + dst := t.TempDir() + + cfg := &config.Config{ + Sync: config.SyncSection{ + Local: src, + Remote: dst, + }, + Settings: config.Settings{ + WatcherDebounce: 100, + Rsync: config.RsyncSettings{ + Archive: true, + Progress: true, + }, + }, + } + + s := syncer.New(cfg) + synced := make(chan struct{}, 1) + + handler := func() { + s.Run() + select { + case synced <- struct{}{}: + default: + } + } + + w, err := watcher.New(src, 100, nil, handler) + if err != nil { + t.Fatalf("watcher creation failed: %v", err) + } + if err := w.Start(); err != nil { + 
t.Fatalf("watcher start failed: %v", err) + } + defer w.Stop() + + // Create a file to trigger sync + time.Sleep(200 * time.Millisecond) // let watcher settle + os.WriteFile(filepath.Join(src, "trigger.txt"), []byte("trigger"), 0644) + + select { + case <-synced: + // Verify + data, err := os.ReadFile(filepath.Join(dst, "trigger.txt")) + if err != nil { + t.Fatalf("file not synced: %v", err) + } + if string(data) != "trigger" { + t.Errorf("expected 'trigger', got %q", string(data)) + } + case <-time.After(5 * time.Second): + t.Fatal("timeout waiting for sync") + } +} +``` + +**Step 2: Run all tests** + +```bash +go test ./... -v +``` +Expected: all PASS + +**Step 3: Commit** + +```bash +git add integration_test.go +git commit -m "test: add integration tests for local sync and watcher" +``` + +--- + +### Task 14: Example Config and Final Polish + +**Files:** +- Create: `esync.toml.example` +- Verify: `go build ./...` and `go vet ./...` + +**Step 1: Create example config** + +Write `esync.toml.example` with the full annotated schema from the design doc. + +**Step 2: Run linting and vet** + +```bash +go vet ./... +go build -o esync . 
+./esync --help +./esync sync --help +./esync init --help +``` + +**Step 3: Clean up go.sum** + +```bash +go mod tidy +``` + +**Step 4: Final commit** + +```bash +git add esync.toml.example go.mod go.sum +git commit -m "chore: add example config and tidy module" +``` + +--- + +## Execution Order Summary + +| Task | Component | Depends On | +|------|-----------|------------| +| 1 | Project scaffolding | — | +| 2 | Config package | 1 | +| 3 | Syncer package | 2 | +| 4 | Watcher package | 1 | +| 5 | Logger package | 1 | +| 6 | TUI (styles, dashboard, log view) | 1 | +| 7 | CLI sync command | 2, 3, 4, 5, 6 | +| 8 | CLI init command | 2 | +| 9 | CLI check + edit commands | 2, 8 | +| 10 | CLI status command | 7 | +| 11 | Signal handling | 7 | +| 12 | README | all above | +| 13 | Integration tests | 3, 4 | +| 14 | Example config + polish | all above | + +**Parallelizable:** Tasks 2, 4, 5, 6 can run in parallel after Task 1. Tasks 8 and 13 can run in parallel with Task 7. diff --git a/esync.toml.example b/esync.toml.example @@ -0,0 +1,115 @@ +# ============================================================================= +# esync configuration file +# ============================================================================= +# +# Copy this file to one of the following locations: +# ./esync.toml (project-local, highest priority) +# ~/.config/esync/config.toml (user-level) +# /etc/esync/config.toml (system-wide) +# +# esync searches these paths in order and uses the first one found. +# You can also pass an explicit path with: esync --config /path/to/config.toml +# + + +# ----------------------------------------------------------------------------- +# [sync] -- Defines what to sync and where +# ----------------------------------------------------------------------------- +[sync] + +# Local directory to watch and sync FROM. (Required) +local = "." + +# Remote destination to sync TO. 
(Required) +# For remote targets use scp-style notation: user@host:/path/to/dest +# For local-to-local sync just use an absolute or relative path. +remote = "user@host:/path/to/dest" + +# Polling interval in seconds for the file-system watcher. +# Default: 1 +interval = 1 + + +# ----------------------------------------------------------------------------- +# [sync.ssh] -- SSH connection settings (optional) +# ----------------------------------------------------------------------------- +# Uncomment and configure this section if you need fine-grained control over +# the SSH connection used for remote syncing. When omitted, esync derives +# SSH parameters from the remote string above. + +# [sync.ssh] +# host = "myserver.com" # SSH hostname +# user = "deploy" # SSH username +# port = 22 # SSH port (default: 22) +# identity_file = "~/.ssh/id_ed25519" # Path to private key +# interactive_auth = false # Enable keyboard-interactive / 2FA auth + + +# ----------------------------------------------------------------------------- +# [settings] -- General behaviour tunables +# ----------------------------------------------------------------------------- +[settings] + +# Debounce delay in milliseconds for the file-system watcher. +# Events within this window are coalesced into a single sync. +# Default: 500 +watcher_debounce = 500 + +# Whether to run a full sync immediately when `esync sync` starts, +# before entering the watch loop. +# Default: false +initial_sync = false + +# Global ignore patterns. Matched files/directories are excluded from +# watching AND from rsync. Patterns follow rsync's --exclude syntax. 
+ignore = [".git", "node_modules", ".DS_Store"] + + +# ----------------------------------------------------------------------------- +# [settings.rsync] -- rsync-specific options +# ----------------------------------------------------------------------------- +[settings.rsync] + +# Use rsync archive mode (-a): preserves symlinks, permissions, timestamps, +# group, owner, and device files. +# Default: true +archive = true + +# Compress data during transfer (-z). +# Default: true +compress = true + +# Keep incremental backups of overwritten files on the remote. +# Default: false +backup = false + +# Directory (relative to the remote root) where backups are stored +# when backup = true. +# Default: ".rsync_backup" +backup_dir = ".rsync_backup" + +# Show per-file transfer progress (--progress). +# Default: true +progress = true + +# Additional raw arguments passed directly to the rsync command. +# Example: ["--delete", "--verbose"] +extra_args = [] + +# Extra rsync-only ignore patterns (appended after settings.ignore). +# These are passed as --exclude flags to rsync but do NOT affect the +# file-system watcher. +ignore = [] + + +# ----------------------------------------------------------------------------- +# [settings.log] -- Logging configuration +# ----------------------------------------------------------------------------- +[settings.log] + +# Path to a log file. When unset, logs go to stderr only. +# file = "/var/log/esync.log" + +# Log format: "text" (human-readable) or "json". 
+# Default: "text" +format = "text" diff --git a/esync/__init__.py b/esync/__init__.py @@ -1,20 +0,0 @@ -""" -esync - File synchronization tool with watchdog/watchman support -""" - -from .config import SyncConfig, SSHConfig -from .sync_manager import SyncManager -from .watcher_base import WatcherBase -from .watchdog_watcher import WatchdogWatcher -from .watchman_watcher import WatchmanWatcher - -__version__ = "0.1.0" - -__all__ = [ - "SyncConfig", - "SSHConfig", - "SyncManager", - "WatcherBase", - "WatchdogWatcher", - "WatchmanWatcher", -] diff --git a/esync/cli.py b/esync/cli.py @@ -1,378 +0,0 @@ -from pathlib import Path -from typing import Optional, Union -import typer -from enum import Enum -from rich.console import Console -from rich.table import Table -from rich.live import Live - -from .sync_manager import SyncManager -from .watchdog_watcher import WatchdogWatcher -from .watchman_watcher import WatchmanWatcher -from .watcher_base import WatcherBase -from .config import ( - load_config, - find_config_file, - ESyncConfig, - SyncConfig, - SSHConfig, - get_default_config, - create_config_for_paths -) - -app = typer.Typer( - name="esync", - help="File synchronization tool with watchdog/watchman support", - add_completion=False, -) - -verbose_help_init = """ -esync - File synchronization tool - -Basic Usage: - esync init # Initialize a new configuration - esync init -c esync.toml # Create a new configuration file -""" - -verbose_help_sync = """ -esync - File synchronization tool - -Basic Usage: - esync sync # Start syncing with configuration file - esync sync -c esync.toml # Use specific configuration file - esync sync -l ./local -r ./remote # Override paths in config - -Quick Sync: - esync sync --quick -l ./local -r ./remote # Quick sync with default settings - esync sync -q -l ./local -r user@host:/path # Quick sync to remote SSH - -Logging: - esync sync --log sync.log # Log operations to file - esync sync -q -l ./local -r ./remote --log sync.log # Quick sync 
with logging - -Output Control: - esync sync # Default: clean minimal output with status panel - esync sync --no-quiet # Show more console output - esync sync -v # Enable verbose mode with detailed output - esync sync --log sync.log # Log operations to file - esync sync -v --log sync.log # Detailed logging to file -""" - - - -console = Console() - -class WatcherType(str, Enum): - WATCHDOG = "watchdog" - WATCHMAN = "watchman" - -def create_watcher( - watcher_type: WatcherType, - source_path: Path, - sync_manager: SyncManager -) -> Union[WatchmanWatcher, WatchdogWatcher]: - """Create appropriate watcher based on type.""" - if watcher_type == WatcherType.WATCHDOG: - return WatchdogWatcher(source_path, sync_manager) - return WatchmanWatcher(source_path, sync_manager) -# -------------------------------------------------------------------------------------------------- - - -# -------------------------------------------------------------------------------------------------- -def display_config(config: ESyncConfig) -> None: - """Display the current configuration.""" - table = Table(title="Current Configuration") - table.add_column("Section", style="cyan") - table.add_column("Setting", style="magenta") - table.add_column("Value", style="green") - - sync_data = config.model_dump().get('sync', {}) - - # Local configuration - local_config = sync_data.get("local", {}) - table.add_row("Local", "path", str(local_config.get("path", "Not set"))) - table.add_row("Local", "interval", str(local_config.get("interval", 1))) - - # Remote configuration - remote_config = sync_data.get("remote", {}) - table.add_row("Remote", "path", str(remote_config.get("path", "Not set"))) - if ssh := remote_config.get("ssh"): - table.add_row("Remote", "ssh.host", ssh.get("host", "")) - table.add_row("Remote", "ssh.user", ssh.get("user", "")) - table.add_row("Remote", "ssh.port", str(ssh.get("port", 22))) - - # ESync settings - esync_settings = config.settings.esync - table.add_row("ESync", "watcher", 
esync_settings.watcher) - if esync_settings.ignore: - table.add_row("ESync", "ignore", "\n".join(esync_settings.ignore)) - - # Rsync settings - rsync_settings = config.settings.rsync - for key, value in rsync_settings.model_dump().items(): - if isinstance(value, list): - value = "\n".join(value) - elif isinstance(value, bool): - value = "✓" if value else "✗" - table.add_row("Rsync", key, str(value)) - - console.print(table) - -@app.callback() -def main(): - """File synchronization tool with watchdog/watchman support.""" - pass -# -------------------------------------------------------------------------------------------------- - - -# -------------------------------------------------------------------------------------------------- -@app.command() -def sync( - ctx: typer.Context, - config_file: Optional[Path] = typer.Option( - None, - "--config", - "-c", - help="Path to TOML config file" - ), - local: Optional[str] = typer.Option( - None, - "--local", - "-l", - help="Local path to sync from" - ), - remote: Optional[str] = typer.Option( - None, - "--remote", - "-r", - help="Remote path to sync to" - ), - watcher: Optional[WatcherType] = typer.Option( - None, - "--watcher", - "-w", - help="Override watcher type" - ), - quick: bool = typer.Option( - False, - "--quick", - "-q", - help="Quick sync with default settings" - ), - log_file: Optional[Path] = typer.Option( - None, - "--log", - help="Path to log file" - ), - quiet: bool = typer.Option( - True, - "--quiet/--no-quiet", - help="Reduce console output (default: quiet)" - ), - verbose: bool = typer.Option( - False, - "--verbose", - "-v", - help="Enable verbose output with detailed logging" - ), - help_override: bool = typer.Option(False, "--help", is_eager=True, help="Show help message"), -): - """Start the file synchronization service.""" - if help_override: - console.print(ctx.get_help(), style="bold") - if verbose: - console.print(verbose_help_sync, style="italic") - raise typer.Exit() - - try: - # Handle quick 
sync option - if quick: - if not local or not remote: - console.print("[red]Both local and remote paths are required with --quick option[/]") - raise typer.Exit(1) - - local_path = Path(local).expanduser().resolve() - local_path.mkdir(parents=True, exist_ok=True) - remote_path = Path(remote).expanduser().resolve() - - # Create quick configuration - config = create_config_for_paths(local_path, remote_path, watcher.value if watcher else None) - if not quiet: - console.print("[bold blue]Using quick sync configuration[/]") - # Display effective configuration - console.print("\n[bold]Quick Sync Configuration:[/]") - display_config(config) - - - - # else we branch to configuration where we use the configuration toml file - else: - # Find and load config file (original flow) - config_path = config_file or find_config_file() - if not config_path: - console.print("[red]No configuration file found![/]") - console.print("\t[green]Try running 'esync init' to create one.") - console.print("\tOr use 'esync sync --quick -l LOCAL -r REMOTE' for quick syncing.[/]") - raise typer.Exit(1) - - # Show which config file we're using - if not quiet: - console.print(f"[bold blue]Loading configuration from:[/] {config_path.resolve()}") - - try: - config = load_config(config_path) # config is the toml parsed - except Exception as e: - console.print(f"[red]Failed to load config: {e}[/]") - raise typer.Exit(1) - - sync_data = config.model_dump().get('sync', {}) - - # Validate required sections - if ('local' not in sync_data) and (not local): - console.print(f"[red]Invalid configuration: specifications for local required ('sync.local' in {config_path} or -l option)[/]") - raise typer.Exit(1) - if local: - local_path = local # if local is provided via CLI, use it and override the config - else: - local_path = sync_data['local']['path'] - - local_path = Path(local_path).expanduser().resolve() - local_path.mkdir(parents=True, exist_ok=True) - - - if ('remote' not in sync_data) and (not remote): - 
console.print(f"[red]Invalid configuration: specifications for remote required ('sync.remote' in {config_path} or -r option)[/]") - raise typer.Exit(1) - if remote: - remote_path = remote - else: - remote_path = sync_data['remote']['path'] - remote_path = Path(remote_path).expanduser().resolve() - - if watcher: - config.settings.esync.watcher = watcher.value - - # update the config based on potential overrides above - config.sync['local']['path'] = str(local_path) - config.sync['remote']['path'] = str(remote_path) - - # Display effective configuration - if not quiet: - console.print("\n[bold]Effective Configuration:[/]") - display_config(config) - - # Get sync data from config - sync_data = config.model_dump().get('sync', {}) - - # remote_path.mkdir(parents=True, exist_ok=True) - - - # Create sync configuration - remote_config = sync_data['remote'] - rsync_settings = config.settings.rsync - - if "ssh" in remote_config: - sync_config = SyncConfig( - target=remote_config['path'], - ssh=SSHConfig(**remote_config["ssh"]), - ignores=rsync_settings.ignore + config.settings.esync.ignore, - backup_enabled=rsync_settings.backup_enabled, - backup_dir=rsync_settings.backup_dir, - compress=rsync_settings.compress, - human_readable=rsync_settings.human_readable - ) - else: - sync_config = SyncConfig( - target=remote_config.path, - ignores=rsync_settings.ignore + config.settings.esync.ignore, - backup_enabled=rsync_settings.backup_enabled, - backup_dir=rsync_settings.backup_dir, - compress=rsync_settings.compress, - human_readable=rsync_settings.human_readable - ) - - # Initialize sync manager and watcher - log_file_path = str(log_file) if log_file else None - - # Create sync manager and watcher - sync_manager = SyncManager(sync_config, log_file_path) - # Apply quiet/verbose settings - sync_manager._quiet = quiet - sync_manager._verbose = verbose - watcher = create_watcher( - WatcherType(config.settings.esync.watcher), - local_path, - sync_manager - ) - - if not quiet: - 
console.print(f"\nStarting {config.settings.esync.watcher} watcher...") - if log_file: - console.print(f"[bold blue]Logging to:[/] {log_file}") - - # Start with Live display - try: - watcher.start() - - if not quiet: - console.print("[bold green]Watcher started successfully. Press Ctrl+C to stop.[/]") - - with Live(sync_manager.status_panel, refresh_per_second=4, console=console) as live: - while True: - live.update(sync_manager.status_panel) - import time - time.sleep(0.5) - - except KeyboardInterrupt: - if not quiet: - console.print("\nStopping watcher...") - watcher.stop() - sync_manager.stop() - if not quiet: - console.print("[bold green]Watcher stopped successfully.[/]") - - except Exception as e: - console.print(f"[red]Error: {str(e)}[/]") - raise typer.Exit(1) -# -------------------------------------------------------------------------------------------------- - - - -# -------------------------------------------------------------------------------------------------- -@app.command() -def init( - ctx: typer.Context, - config_file: Path = typer.Option( - Path("esync.toml"), "--config", "-c", help="Path to create config file" - ), - verbose: bool = typer.Option(False, "--verbose", help="Enable verbose output"), - help_override: bool = typer.Option(False, "--help", is_eager=True, help="Show help message"), - ): - """Initialize a new configuration file.""" - if help_override: - console.print(ctx.get_help(), style="bold") - if verbose: - console.print(verbose_help_init, style="italic") - raise typer.Exit() - - if config_file.exists(): - overwrite = typer.confirm( - f"Config file {config_file} already exists. 
Overwrite?", - abort=True - ) - - # Get default config from the central location - default_config = get_default_config() - - # Write config to file - import tomli_w - with open(config_file, 'wb') as f: - tomli_w.dump(default_config, f) - - console.print(f"[green]Created config file: {config_file}[/]") -# -------------------------------------------------------------------------------------------------- - - -# -------------------------------------------------------------------------------------------------- -if __name__ == "__main__": - app() diff --git a/esync/config.py b/esync/config.py @@ -1,186 +0,0 @@ -from pathlib import Path -from typing import Optional, List, Dict, Any, Union -from pydantic import BaseModel, Field -import re -import tomli -from rich.console import Console - -console = Console() - -class SSHConfig(BaseModel): - host: str - user: Optional[str] = None - port: int = 22 - allow_password_auth: bool = True - identity_file: Optional[str] = None - interactive_auth: bool = True # Enable interactive authentication prompts - -class SyncConfig(BaseModel): - target: Union[Path, str] - ssh: Optional[SSHConfig] = None - ignores: List[str] = Field(default_factory=list) - backup_enabled: bool = False - backup_dir: str = ".rsync_backup" - compress: bool = True - human_readable: bool = True - verbose: bool = False - - - def is_remote(self) -> bool: - """Check if this is a remote sync configuration.""" - return self.ssh is not None - - def get_target_path(self) -> Path: - """Get the target path as a Path object.""" - if isinstance(self.target, str): - return Path(self.target).expanduser() - return self.target - - - - -class RemoteConfig(BaseModel): - path: Union[Path, str] - ssh: Optional[SSHConfig] = None - -class RsyncSettings(BaseModel): - backup_enabled: bool = True - backup_dir: str = ".rsync_backup" - compression: bool = True - verbose: bool = False - archive: bool = True - compress: bool = True - human_readable: bool = True - progress: bool = True - 
ignore: List[str] = Field(default_factory=list) - -class ESyncSettings(BaseModel): - watcher: str = "watchdog" - ignore: List[str] = Field(default_factory=list) - -class Settings(BaseModel): - esync: ESyncSettings = Field(default_factory=ESyncSettings) - rsync: RsyncSettings = Field(default_factory=RsyncSettings) - -class ESyncConfig(BaseModel): - sync: Dict[str, Any] = Field(default_factory=dict) - settings: Settings = Field(default_factory=Settings) - -def get_default_config() -> Dict[str, Any]: - """Get the default configuration.""" - return { - "sync": { - "local": { - "path": "./local", - "interval": 1 - }, - "remote": { - "path": "./remote" - } - }, - "settings": { - "esync": { - "watcher": "watchdog", - "ignore": [ - "*.log", - "*.tmp", - ".env" - ] - }, - "rsync": { - "backup_enabled": True, - "backup_dir": ".rsync_backup", - "compression": True, - "verbose": False, - "archive": True, - "compress": True, - "human_readable": True, - "progress": True, - "ignore": [ - "*.swp", - ".git/", - "node_modules/", - "**/__pycache__/", - ] - } - } - } - -def create_config_for_paths(local_path: str, remote_path: str, watcher_type: Optional[str] = None) -> ESyncConfig: - """Create a configuration with specific paths.""" - # Start with default config - config_dict = get_default_config() - - # Update paths - config_dict["sync"]["local"]["path"] = local_path - - # Set watcher if provided - if watcher_type: - config_dict["settings"]["esync"]["watcher"] = watcher_type - - # Handle SSH configuration if needed -> use the function ... 
that is defined above like is remote path - # check if config is remote - is_remote_ssh = False # check if we have to deal with ssh or not - if ":" in remote_path: - if not ( len(remote_path) >= 2 and remote_path[1] == ':' and remote_path[0].isalpha() ): - is_remote_ssh = True - - # now we split the remote path between ssh case and non ssh case - if is_remote_ssh: - # Extract user, host, and path - user_host, path = remote_path.split(":", 1) - if "@" in user_host: - user, host = user_host.split("@", 1) - config_dict["sync"]["remote"] = { - "path": path, - "ssh": { - "host": host, - "user": user, - "port": 22 - } - } - else: - # No user specified - config_dict["sync"]["remote"] = { - "path": path, - "ssh": { - "host": user_host, - "port": 22 - } - } - else: - # Local path - config_dict["sync"]["remote"]["path"] = remote_path - - return ESyncConfig(**config_dict) - -def load_config(config_path: Path) -> ESyncConfig: - """Load and validate TOML configuration file.""" - try: - with open(config_path, "rb") as f: - config_data = tomli.load(f) - return ESyncConfig(**config_data) - except FileNotFoundError: - console.print(f"[yellow]Config file not found: {config_path}[/]") - raise - except tomli.TOMLDecodeError as e: - console.print(f"[red]Error parsing TOML file: {e}[/]") - raise - except Exception as e: - console.print(f"[red]Error loading config: {e}[/]") - raise - -def get_default_config_paths() -> List[Path]: - """Return list of default config file locations in order of precedence.""" - return [ - Path.cwd() / "esync.toml", # Current directory - Path.home() / ".config" / "esync" / "config.toml", # User config directory - Path("/etc/esync/config.toml"), # System-wide config - ] - -def find_config_file() -> Optional[Path]: - """Find the first available config file from default locations.""" - for path in get_default_config_paths(): - if path.is_file(): - return path - return None diff --git a/esync/sync_manager.py b/esync/sync_manager.py @@ -1,476 +0,0 @@ -import 
threading -import queue -import subprocess -import datetime -import os -import logging -import re -from pathlib import Path -from typing import Optional, List, Union -from rich.console import Console -from rich.panel import Panel -from rich.text import Text -from .config import SyncConfig - -console = Console() -# console = Console(stderr=True, log_time=True, log_path=False) # for debugging - - -# Customize logger to use shorter log level names -class CustomAdapter(logging.LoggerAdapter): - def process(self, msg, kwargs): - return msg, kwargs - -class ShortLevelNameFormatter(logging.Formatter): - """Custom formatter with shorter level names""" - short_levels = { - 'DEBUG': 'DEBUG', - 'INFO': 'INFO', - 'WARNING': 'WARN', - 'ERROR': 'ERROR', - 'CRITICAL': 'CRITIC' - } - - def format(self, record): - if record.levelname in self.short_levels: - record.levelname = self.short_levels[record.levelname] - return super().format(record) - -class SyncManager: - """Manages file synchronization operations.""" - - def __init__(self, config: SyncConfig, log_file: Optional[str] = None): - """Initialize the sync manager. 
-
-        Args:
-            config: The sync configuration
-            log_file: Optional path to log file
-        """
-        self._sync_lock = threading.Lock()
-        self._task_queue = queue.Queue()
-        self._current_sync: Optional[subprocess.Popen] = None
-        self._should_stop = threading.Event()
-        self._sync_thread = threading.Thread(target=self._sync_worker, daemon=True)
-        self._config = config
-        self._last_sync_time = None
-        self._last_sync_status = None
-        self._sync_count = 0
-
-        # Default to quiet mode (cleaner output)
-        self._quiet = True
-        self._verbose = False
-
-        # Set up logging if log file is specified
-        self._logger = None
-        if log_file:
-            self._setup_logging(log_file)
-
-        # Set verbose/quiet mode based on config
-        if hasattr(config, 'verbose'):
-            self._verbose = config.verbose
-            self._quiet = not config.verbose
-
-        self._sync_thread.start()
-
-        # Single status panel that we'll update
-        self._status_panel = Panel(
-            Text("Waiting for changes...", style="italic dim"),
-            title="Sync Status"
-        )
-
-    def _setup_logging(self, log_file: str):
-        """Set up logging to file."""
-        self._logger = logging.getLogger("esync")
-        self._logger.setLevel(logging.DEBUG)  # Always use DEBUG level for logging
-
-        # Create handlers
-        file_handler = logging.FileHandler(log_file)
-        file_handler.setLevel(logging.DEBUG)  # Always log at DEBUG level
-
-        # Create formatters with fixed-width level names using our custom formatter
-        formatter = ShortLevelNameFormatter(
-            '%(asctime)s - %(name)s - %(levelname)-5s - %(message)s',
-            datefmt='%Y-%m-%d %H:%M:%S'
-        )
-        file_handler.setFormatter(formatter)
-
-        # Add handlers to the logger
-        self._logger.addHandler(file_handler)
-        self._logger.info("ESync started")
-        self._logger.info(f"Log level set to: DEBUG")
-
-    def _log(self, level: str, message: str):
-        """Log a message if logging is enabled."""
-        if self._logger:
-            if level.lower() == "info":
-                self._logger.info(message)
-            elif level.lower() == "warning" or level.lower() == "warn":
-                self._logger.warning(message)
-            elif level.lower() == "error" or level.lower() == "err":
-                self._logger.error(message)
-            elif level.lower() == "debug" or level.lower() == "dbg":
-                self._logger.debug(message)
-
-    @property
-    def config(self) -> SyncConfig:
-        return self._config
-
-    @property
-    def last_sync_time(self) -> Optional[datetime.datetime]:
-        return self._last_sync_time
-
-    @property
-    def last_sync_status(self) -> Optional[bool]:
-        return self._last_sync_status
-
-    @property
-    def sync_count(self) -> int:
-        return self._sync_count
-
-    @property
-    def status_panel(self) -> Panel:
-        return self._status_panel
-
-    def schedule_sync(self, path: Path):
-        """Schedule a sync task."""
-        self._task_queue.put(path)
-        self._log("info", f"Sync scheduled for {path}")
-
-    def stop(self):
-        """Stop the sync manager and cleanup."""
-        self._should_stop.set()
-        self._log("info", "Stopping sync manager")
-        if self._current_sync:
-            self._current_sync.terminate()
-        self._sync_thread.join()
-
-    def _update_status(self, status_text: Text):
-        """Update the status panel with new text."""
-        self._status_panel = Panel(status_text, title="Sync Status")
-
-    def _sync_worker(self):
-        while not self._should_stop.is_set():
-            try:
-                path = self._task_queue.get(timeout=1.0)
-            except queue.Empty:
-                continue
-
-            with self._sync_lock:
-                if self._current_sync is not None:
-                    status_text = Text()
-                    status_text.append("ESync: ", style="bold cyan")
-                    status_text.append(f"Sync #{self._sync_count} ", style="yellow")
-                    status_text.append("in progress - ", style="italic")
-                    status_text.append("waiting for previous sync to complete", style="italic yellow")
-                    self._update_status(status_text)
-
-                    self._log("warn", "Sync already in progress, queuing changes...")
-                    if not self._quiet:
-                        console.print("[yellow]Sync already in progress, queuing changes...[/]", highlight=False)
-
-                    try:
-                        self._current_sync.wait()
-                    except subprocess.CalledProcessError as e:
-                        error_msg = f"Previous sync failed: {e}"
-                        self._log("error", error_msg)
-                        if not self._quiet:
-                            console.print(f"[bold red]{error_msg}[/]", highlight=False)
-
-                try:
-                    self._perform_sync(path)
-                except Exception as e:
-                    error_msg = f"Sync error: {e}"
-                    self._log("error", error_msg)
-                    if not self._quiet:
-                        console.print(f"[bold red]{error_msg}[/]", highlight=False)
-                    self._last_sync_status = False
-
-            self._task_queue.task_done()
-
-    def _format_size(self, size_bytes: int) -> str:
-        """Format bytes into human readable size."""
-        if size_bytes < 1024:
-            return f"{size_bytes} B"
-        elif size_bytes < 1024 * 1024:
-            return f"{size_bytes/1024:.1f} KB"
-        elif size_bytes < 1024 * 1024 * 1024:
-            return f"{size_bytes/(1024*1024):.1f} MB"
-        else:
-            return f"{size_bytes/(1024*1024*1024):.2f} GB"
-
-    def _extract_transferred_files(self, stdout: str) -> List[str]:
-        """Extract list of transferred files from rsync output."""
-        transferred_files = []
-
-        # Split output into lines and process
-        for line in stdout.splitlines():
-            # Skip common status message lines
-            if any(pattern in line for pattern in
-                   ['building file list', 'files to consider', 'sent', 'total size', 'bytes/sec']):
-                continue
-
-            # Most rsync outputs show files on their own line before any stats
-            # Try to extract just the filename (ignoring paths and stats)
-            parts = line.strip().split()
-            if not parts:
-                continue
-
-            # Files typically appear at the start of lines, before transfer stats
-            filename = parts[0]
-
-            # Skip purely numeric entries and other likely non-filenames
-            if (filename.isdigit() or
-                filename in ['sending', 'sent', 'total', 'building'] or
-                '%' in filename):
-                continue
-
-            # Get base filename and add if it looks valid
-            base_name = os.path.basename(filename)
-            if base_name and not base_name.isdigit():
-                transferred_files.append(base_name)
-
-        # Make sure our list is unique - no duplicates
-        transferred_files = list(dict.fromkeys(transferred_files))
-
-        return transferred_files
-
-    def _extract_file_info(self, stdout: str):
-        """Extract file information from rsync output."""
-        stats_lines = []
-        total_bytes = 0
-
-        # Try to extract file count from rsync output
-        file_count_match = re.search(r'(\d+) files? to consider', stdout)
-        file_count = file_count_match.group(1) if file_count_match else "0"
-
-        # Extract bytes sent/received
-        bytes_match = re.search(r'sent ([\d,]+) bytes\s+received ([\d,]+) bytes', stdout)
-        if bytes_match:
-            sent = bytes_match.group(1).replace(',', '')
-            received = bytes_match.group(2).replace(',', '')
-            try:
-                total_bytes = int(sent) + int(received)
-            except ValueError:
-                # In case of parsing issues
-                total_bytes = 0
-
-        # Calculate time taken
-        time_taken = (datetime.datetime.now() - self._last_sync_time).total_seconds()
-        time_str = f"{time_taken:.2f}s"
-
-        # Extract transferred files
-        transferred_files = self._extract_transferred_files(stdout)
-
-        # Format transferred files with truncation if needed
-        files_summary = ""
-        if transferred_files:
-            if len(transferred_files) <= 3:
-                files_summary = ", ".join(transferred_files)
-            else:
-                files_summary = f"{transferred_files[0]}, {transferred_files[1]}, ... +{len(transferred_files)-2} more"
-
-        # Format total bytes in human readable format
-        size_str = self._format_size(total_bytes)
-
-        # Build stats lines
-        stats_lines.append(f"Files: {file_count}, Size: {size_str}, Time: {time_str}")
-        if files_summary:
-            stats_lines.append(f"Transferred: {files_summary}")
-
-        return stats_lines
-
-    def _log_rsync_output(self, stdout: str):
-        """Log the rsync stdout in a readable format."""
-        if not self._logger:
-            return
-
-        self._log("debug", "===== RSYNC OUTPUT START =====")
-        for line in stdout.splitlines():
-            if line.strip():
-                self._log("debug", f"RSYNC: {line}")
-        self._log("debug", "===== RSYNC OUTPUT END =====")
-
-    def _perform_sync(self, path: Path):
-        """Perform the actual sync operation."""
-        cmd = self._build_rsync_command(path)
-        self._sync_count += 1
-        self._last_sync_time = datetime.datetime.now()
-
-        # Always log the exact command at DEBUG level
-        command_str = ' '.join(cmd)
-        self._log("debug", f"COMMAND: {command_str}")
-
-        status_text = Text()
-        status_text.append("ESync: ", style="bold cyan")
-        status_text.append(f"Sync #{self._sync_count} ", style="yellow")
-        status_text.append("in progress ", style="italic")
-        status_text.append(f"[{self._last_sync_time.strftime('%Y-%m-%d %H:%M:%S')}]", style="dim")
-        self._update_status(status_text)
-
-        self._log("info", f"Starting sync #{self._sync_count} from {path}")
-
-        # Always capture stdout to extract info, even without logging
-        self._current_sync = subprocess.Popen(
-            cmd,
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-            text=True
-        )
-
-        try:
-            stdout, stderr = self._current_sync.communicate()
-
-            # Log the full rsync output at DEBUG level
-            if stdout:
-                self._log_rsync_output(stdout)
-
-            if self._current_sync.returncode != 0:
-                if stderr:
-                    self._log("error", f"Rsync stderr: {stderr}")
-
-                self._last_sync_status = False
-                status_text = Text()
-                status_text.append("ESync: ", style="bold cyan")
-                status_text.append(f"Sync #{self._sync_count} ", style="yellow")
-                status_text.append("failed ", style="bold red")
-                status_text.append(f"[{datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}]", style="dim")
-                status_text.append("\n", style="default")
-                status_text.append(f"Error: {stderr.strip() if stderr else f'Exit code {self._current_sync.returncode}'}", style="red")
-                self._update_status(status_text)
-
-                error_msg = f"Sync failed with code {self._current_sync.returncode}"
-                self._log("error", error_msg)
-                if not self._quiet:
-                    console.print(f"[bold red]✗[/] {error_msg}", highlight=False)
-
-                raise subprocess.CalledProcessError(
-                    self._current_sync.returncode, cmd, stdout, stderr
-                )
-            else:
-                # Always extract stats even without logging
-                stats_lines = self._extract_file_info(stdout)
-
-                self._last_sync_status = True
-                status_text = Text()
-                status_text.append("ESync: ", style="bold cyan")
-                status_text.append(f"Sync #{self._sync_count} ", style="yellow")
-                status_text.append("completed ", style="bold green")
-                status_text.append(f"[{datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}]", style="dim")
-
-                for stats_line in stats_lines:
-                    status_text.append("\n", style="default")
-                    status_text.append(stats_line, style="green dim")
-
-                self._update_status(status_text)
-
-                self._log("info", f"Sync #{self._sync_count} completed successfully")
-                if not self._quiet:
-                    console.print(f"[bold green]✓[/] Sync #{self._sync_count} completed successfully", highlight=False)
-        finally:
-            self._current_sync = None
-
-    def _parse_remote_string(self, remote_str: str) -> tuple:
-        """
-        Parse a remote string into username, host, and path components.
-        Format: [user@]host:path
-        Returns: (username, host, path)
-        """
-        match = re.match(r'^(?:([^@]+)@)?([^:]+):(.+)$', remote_str)
-        if match:
-            return match.groups()
-        return None, None, remote_str
-
-    def _is_remote_path(self, path_str: str) -> bool:
-        """
-        Determine if a string represents a remote path.
-        A remote path is in the format [user@]host:path.
-        """
-        # Avoid treating Windows paths (C:) as remote
-        if len(path_str) >= 2 and path_str[1] == ':' and path_str[0].isalpha():
-            return False
-        # Simple regex to match remote path format
-        return bool(re.match(r'^(?:[^@]+@)?[^/:]+:.+$', path_str))
-
-
-    def _build_rsync_command(self, source_path: Path) -> list[str]:
-        """Build rsync command for local or remote sync."""
-        cmd = [
-            "rsync",
-            "--recursive",          # recursive
-            "--times",              # preserve times
-            "--progress",           # progress for parsing
-            # "--verbose",          # verbose for parsing
-            # "--links",            # copy symlinks as symlinks
-            # "--copy-links",       # transform symlink into referent file/dir
-            "--copy-unsafe-links",  # only "unsafe" symlinks are transformed
-        ]
-
-        # Add backup if enabled
-        if hasattr(self._config, 'backup_enabled') and self._config.backup_enabled:
-            cmd.append("--backup")
-            backup_dir = getattr(self._config, 'backup_dir', '.rsync_backup')
-            cmd.append(f"--backup-dir={backup_dir}")
-
-        # Add other rsync options if configured
-        if hasattr(self._config, 'compress') and self._config.compress:
-            cmd.append("--compress")
-        if hasattr(self._config, 'human_readable') and self._config.human_readable:
-            cmd.append("--human-readable")
-        if hasattr(self._config, 'verbose') and self._config.verbose:
-            cmd.append("--verbose")
-        # Todo this is where we add standard rsync commands
-
-
-        # Add ignore patterns
-        for pattern in self._config.ignores:
-            # Remove any quotes and brackets from the input
-            clean_pattern = pattern.strip('"[]\'')
-            # Handle **/ pattern (recursive)
-            if clean_pattern.startswith('**/'):
-                clean_pattern = clean_pattern[3:]  # Remove **/ prefix
-            cmd.extend(["--exclude", clean_pattern])
-
-        # Ensure we have absolute paths for the source
-        source = f"{source_path.absolute()}/"
-
-        # Get target as string
-        target_str = str(self._config.target)
-        # Determine if target is a remote path
-        is_remote = self._is_remote_path(target_str)
-
-
-        if self._config.is_remote():
-            # For remote sync via SSH config object
-            ssh = self._config.ssh
-            if ssh.user:
-                remote = f"{ssh.user}@{ssh.host}:{self._config.target}"
-            else:
-                remote = f"{ssh.host}:{self._config.target}"
-            cmd.append(source)
-            cmd.append(remote)
-
-        elif is_remote:
-            # For direct remote specification (host:path)
-            # Use the target string directly without any modification
-            cmd.append(source)
-            cmd.append(target_str)
-
-            # Log for debugging
-            self._log("debug", f"Remote path detected: '{target_str}'")
-
-        else:
-            # For local sync
-            try:
-                target = Path(target_str).expanduser()
-                target.mkdir(parents=True, exist_ok=True)
-            except Exception as e:
-                self._log("error", f"Error creating target directory: {e}")
-                raise
-
-            cmd.append(source)
-            cmd.append(str(target) + '/')
-
-        # Log the final command for debugging
-        self._log("debug", f"Final rsync command: {' '.join(cmd)}")
-
-        return cmd
diff --git a/esync/watchdog_watcher.py b/esync/watchdog_watcher.py
@@ -1,52 +0,0 @@
-from pathlib import Path
-from watchdog.observers import Observer
-from watchdog.events import FileSystemEventHandler
-from .watcher_base import WatcherBase
-from .sync_manager import SyncManager
-import fnmatch
-
-class WatchdogHandler(FileSystemEventHandler):
-    def __init__(self, root_path: Path, sync_manager: SyncManager):
-        self.root_path = root_path
-        self.sync_manager = sync_manager
-        self.ignores = sync_manager.config.ignores
-
-    def should_ignore(self, path: str) -> bool:
-        # Convert path to relative path from root for matching
-        try:
-            rel_path = Path(path).relative_to(self.root_path)
-            return any(fnmatch.fnmatch(str(rel_path), pattern.strip('"[]\''))
-                       for pattern in self.ignores)
-        except ValueError:
-            return False
-
-    def on_any_event(self, event):
-        # Skip directories and temporary files
-        if event.is_directory or any(
-            event.src_path.endswith(p) for p in ['.swp', '.swx', '~']
-        ):
-            return
-
-        # Check if file should be ignored
-        if self.should_ignore(event.src_path):
-            print(f"Ignoring change to {event.src_path}")
-            return
-
-        # Only sync if file isn't ignored
-        self.sync_manager.schedule_sync(self.root_path)
-
-class WatchdogWatcher(WatcherBase):
-    def __init__(self, root: Path, sync_manager: SyncManager):
-        super().__init__(root, sync_manager)
-        self.handler = WatchdogHandler(root, sync_manager)
-        self.observer = Observer()
-
-    def start(self) -> None:
-        """Start watching for changes."""
-        self.observer.schedule(self.handler, str(self.root), recursive=True)
-        self.observer.start()
-
-    def stop(self) -> None:
-        """Stop watching for changes."""
-        self.observer.stop()
-        self.observer.join()
diff --git a/esync/watcher_base.py b/esync/watcher_base.py
@@ -1,18 +0,0 @@
-from abc import ABC, abstractmethod
-from pathlib import Path
-from .sync_manager import SyncManager
-
-class WatcherBase(ABC):
-    def __init__(self, root: Path, sync_manager: SyncManager):
-        self.root = root
-        self.sync_manager = sync_manager
-
-    @abstractmethod
-    def start(self) -> None:
-        """Start watching for changes."""
-        pass
-
-    @abstractmethod
-    def stop(self) -> None:
-        """Stop watching for changes."""
-        pass
diff --git a/esync/watchman_watcher.py b/esync/watchman_watcher.py
@@ -1,99 +0,0 @@
-# esync/watchman_watcher.py
-import pywatchman
-from pathlib import Path
-from rich.console import Console
-from .watcher_base import WatcherBase
-from .sync_manager import SyncManager
-import time
-
-console = Console()
-
-class WatchmanWatcher(WatcherBase):
-    def __init__(self, root: Path, sync_manager: SyncManager):
-        super().__init__(root, sync_manager)
-        self.client = pywatchman.client(timeout=5.0)
-        self.watch = None
-        self._stop = False
-
-    def start(self) -> None:
-        """Start watching for changes."""
-        try:
-            self.client.capabilityCheck(required=['relative_root'])
-
-            root_path = str(self.root.absolute().resolve())
-            console.print(f"Starting watchman on {root_path}")
-
-            watch_response = self.client.query('watch-project', root_path)
-            self.watch = watch_response['watch']
-            relative_path = watch_response.get('relative_path')
-
-            clock = self.client.query('clock', self.watch)['clock']
-
-            expr = self.build_ignore_expression(self.sync_manager.config.ignores)
-
-            sub = {
-                'expression': expr,
-                'fields': ['name', 'exists', 'type'],
-                'since': clock
-            }
-
-            if relative_path:
-                sub['relative_root'] = relative_path
-
-            self.client.query('subscribe', self.watch, 'sync-subscription', sub)
-            console.print("Watchman subscription established")
-
-            while not self._stop:
-                try:
-                    data = self.client.receive()
-                    if data and 'subscription' in data:
-                        files = data.get('files', [])
-                        if files:
-                            console.print(f"Changes detected: {files}")
-                            self.sync_manager.schedule_sync(self.root)
-                except pywatchman.SocketTimeout:
-                    continue
-                except Exception as e:
-                    console.print(f"[red]Error processing watchman event: {e}[/red]")
-                    if not isinstance(e, pywatchman.SocketTimeout):
-                        break
-
-                time.sleep(0.1)
-
-        except Exception as e:
-            console.print(f"[red]Failed to initialize Watchman: {e}[/red]")
-            try:
-                version = self.client.query('version')
-                console.print(f"Watchman version: {version}")
-            except Exception as ve:
-                console.print(f"Could not get version: {ve}")
-            raise
-
-    def stop(self) -> None:
-        """Stop watching for changes."""
-        self._stop = True
-        if self.watch:
-            try:
-                self.client.query('unsubscribe', self.watch, 'sync-subscription')
-            except:
-                pass
-        try:
-            self.client.close()
-        except:
-            pass
-
-    def build_ignore_expression(self, ignores: list[str]) -> list:
-        """Build watchman ignore expression from ignore patterns."""
-        not_expressions = [
-            ['not', ['match', pattern.strip('"[]\'')]]
-            for pattern in ignores
-        ]
-        return ['allof',
-                ['type', 'f'],     # only watch files
-                *not_expressions,  # add all ignore patterns
-                ['not', ['match', '*.swp']],
-                ['not', ['match', '*.swx']],
-                ['not', ['match', '.git/*']],
-                ['not', ['match', '__pycache__/*']],
-                ['not', ['match', '*.pyc']]
-        ]
diff --git a/go.mod b/go.mod
@@ -0,0 +1,41 @@
+module github.com/eloualiche/esync
+
+go 1.25.0
+
+require (
+	github.com/charmbracelet/bubbletea v1.3.10
+	github.com/charmbracelet/lipgloss v1.1.0
+	github.com/fsnotify/fsnotify v1.9.0
+	github.com/spf13/cobra v1.10.2
+	github.com/spf13/viper v1.21.0
+)
+
+require (
+	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
+	github.com/charmbracelet/x/ansi v0.10.1 // indirect
+	github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
+	github.com/charmbracelet/x/term v0.2.1 // indirect
+	github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
+	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
+	github.com/inconshreveable/mousetrap v1.1.0 // indirect
+	github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
+	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/mattn/go-localereader v0.0.1 // indirect
+	github.com/mattn/go-runewidth v0.0.16 // indirect
+	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
+	github.com/muesli/cancelreader v0.2.2 // indirect
+	github.com/muesli/termenv v0.16.0 // indirect
+	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
+	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/sagikazarmark/locafero v0.11.0 // indirect
+	github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
+	github.com/spf13/afero v1.15.0 // indirect
+	github.com/spf13/cast v1.10.0 // indirect
+	github.com/spf13/pflag v1.0.10 // indirect
+	github.com/subosito/gotenv v1.6.0 // indirect
+	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
+	go.yaml.in/yaml/v3 v3.0.4 // indirect
+	golang.org/x/sys v0.36.0 // indirect
+	golang.org/x/text v0.28.0 // indirect
+)
diff --git a/go.sum b/go.sum
@@ -0,0 +1,93 @@
+github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
+github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw= +github.com/charmbracelet/bubbletea v1.3.10/go.mod h1:ORQfo0fk8U+po9VaNvnV95UPWA1BitP1E0N6xJPlHr4= +github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc h1:4pZI35227imm7yK2bGPcfpFEmuY1gc2YSTShr4iJBfs= +github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc/go.mod h1:X4/0JoqgTIPSFcRA/P6INZzIuyqdFY5rm8tb41s9okk= +github.com/charmbracelet/lipgloss v1.1.0 h1:vYXsiLHVkK7fp74RkV7b2kq9+zDLoEU4MZoFqR/noCY= +github.com/charmbracelet/lipgloss v1.1.0/go.mod h1:/6Q8FR2o+kj8rz4Dq0zQc3vYf7X+B0binUUBwA0aL30= +github.com/charmbracelet/x/ansi v0.10.1 h1:rL3Koar5XvX0pHGfovN03f5cxLbCF2YvLeyz7D2jVDQ= +github.com/charmbracelet/x/ansi v0.10.1/go.mod h1:3RQDQ6lDnROptfpWuUVIUG64bD2g2BgntdxH0Ya5TeE= +github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0GVL4jeHEwG5YOXDmi86oYw2yuYUGqz6a8sLwg0X8= +github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs= +github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ= +github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4= +github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 
h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= +github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY= +github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4= +github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI= +github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo= +github.com/muesli/cancelreader 
v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA= +github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo= +github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc= +github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8= +github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc= +github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik= +github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw= +github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U= +github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I= +github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg= +github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY= +github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo= +github.com/spf13/cobra 
v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU= +github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no= +github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E= +golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE= +golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k= +golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= 
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
diff --git a/integration_test.go b/integration_test.go
@@ -0,0 +1,150 @@
+package main
+
+import (
+	"os"
+	"path/filepath"
+	"testing"
+	"time"
+
+	"github.com/eloualiche/esync/internal/config"
+	"github.com/eloualiche/esync/internal/syncer"
+	"github.com/eloualiche/esync/internal/watcher"
+)
+
+// TestLocalSyncIntegration verifies that the Syncer can rsync files between
+// two local directories. This test requires rsync to be installed.
+func TestLocalSyncIntegration(t *testing.T) {
+	// Ensure rsync is available
+	if _, err := os.Stat("/usr/bin/rsync"); err != nil {
+		if _, err2 := os.Stat("/opt/homebrew/bin/rsync"); err2 != nil {
+			t.Skip("rsync not found, skipping integration test")
+		}
+	}
+
+	// 1. Create two temp dirs (src, dst)
+	src := t.TempDir()
+	dst := t.TempDir()
+
+	// 2. Write a test file to src
+	testContent := "hello from integration test\n"
+	testFile := filepath.Join(src, "testfile.txt")
+	if err := os.WriteFile(testFile, []byte(testContent), 0644); err != nil {
+		t.Fatalf("failed to write test file: %v", err)
+	}
+
+	// 3. Create a Config pointing src -> dst with archive=true, progress=true
+	cfg := &config.Config{
+		Sync: config.SyncSection{
+			Local:  src,
+			Remote: dst,
+		},
+		Settings: config.Settings{
+			Rsync: config.RsyncSettings{
+				Archive:  true,
+				Progress: true,
+			},
+		},
+	}
+
+	// 4. Create a Syncer, run it
+	s := syncer.New(cfg)
+	result, err := s.Run()
+
+	// 5. Verify: no error, result.Success is true
+	if err != nil {
+		t.Fatalf("syncer.Run() returned error: %v", err)
+	}
+	if !result.Success {
+		t.Fatalf("expected result.Success=true, got false; error: %s", result.ErrorMessage)
+	}
+
+	// 6. Verify: the file exists in dst with correct contents
+	dstFile := filepath.Join(dst, "testfile.txt")
+	got, err := os.ReadFile(dstFile)
+	if err != nil {
+		t.Fatalf("failed to read synced file %s: %v", dstFile, err)
+	}
+	if string(got) != testContent {
+		t.Errorf("synced file content mismatch:\n got: %q\n want: %q", string(got), testContent)
+	}
+}
+
+// TestWatcherTriggersSync verifies that the watcher detects a new file and
+// triggers a sync from src to dst. This test requires rsync to be installed.
+func TestWatcherTriggersSync(t *testing.T) {
+	// Ensure rsync is available
+	if _, err := os.Stat("/usr/bin/rsync"); err != nil {
+		if _, err2 := os.Stat("/opt/homebrew/bin/rsync"); err2 != nil {
+			t.Skip("rsync not found, skipping integration test")
+		}
+	}
+
+	// 1. Create two temp dirs (src, dst)
+	src := t.TempDir()
+	dst := t.TempDir()
+
+	// 2. Create Config and Syncer
+	cfg := &config.Config{
+		Sync: config.SyncSection{
+			Local:  src,
+			Remote: dst,
+		},
+		Settings: config.Settings{
+			Rsync: config.RsyncSettings{
+				Archive:  true,
+				Progress: true,
+			},
+		},
+	}
+	s := syncer.New(cfg)
+
+	// 3. Create watcher with handler that runs syncer and signals a channel
+	synced := make(chan struct{}, 1)
+	handler := func() {
+		_, _ = s.Run()
+		select {
+		case synced <- struct{}{}:
+		default:
+		}
+	}
+
+	// Use short debounce (100ms) for fast tests
+	w, err := watcher.New(src, 100, nil, handler)
+	if err != nil {
+		t.Fatalf("watcher.New() failed: %v", err)
+	}
+
+	// 4. Start watcher
+	if err := w.Start(); err != nil {
+		t.Fatalf("watcher.Start() failed: %v", err)
+	}
+	defer w.Stop()
+
+	// 5. Wait 200ms for watcher to settle
+	time.Sleep(200 * time.Millisecond)
+
+	// 6. Write a file to src
+	testContent := "watcher triggered sync\n"
+	testFile := filepath.Join(src, "watched.txt")
+	if err := os.WriteFile(testFile, []byte(testContent), 0644); err != nil {
+		t.Fatalf("failed to write test file: %v", err)
+	}
+
+	// 7. Wait for signal (with 5s timeout)
+	select {
+	case <-synced:
+		// success - sync was triggered
+	case <-time.After(5 * time.Second):
+		t.Fatal("timed out waiting for watcher to trigger sync")
+	}
+
+	// 8. Verify: file was synced to dst with correct contents
+	dstFile := filepath.Join(dst, "watched.txt")
+	got, err := os.ReadFile(dstFile)
+	if err != nil {
+		t.Fatalf("failed to read synced file %s: %v", dstFile, err)
+	}
+	if string(got) != testContent {
+		t.Errorf("synced file content mismatch:\n got: %q\n want: %q", string(got), testContent)
+	}
+}
diff --git a/internal/config/config.go b/internal/config/config.go
@@ -0,0 +1,212 @@
+// Package config handles loading, validating, and providing defaults for
+// esync TOML configuration files.
+package config
+
+import (
+	"fmt"
+	"os"
+	"strings"
+
+	"github.com/spf13/viper"
+)
+
+// ---------------------------------------------------------------------------
+// Structs
+// ---------------------------------------------------------------------------
+
+// SSHConfig holds SSH connection parameters for remote syncing.
+type SSHConfig struct {
+	Host            string `mapstructure:"host"`
+	User            string `mapstructure:"user"`
+	Port            int    `mapstructure:"port"`
+	IdentityFile    string `mapstructure:"identity_file"`
+	InteractiveAuth bool   `mapstructure:"interactive_auth"`
+}
+
+// SyncSection defines the local/remote pair and optional SSH tunnel.
+type SyncSection struct {
+	Local    string     `mapstructure:"local"`
+	Remote   string     `mapstructure:"remote"`
+	Interval int        `mapstructure:"interval"`
+	SSH      *SSHConfig `mapstructure:"ssh"`
+}
+
+// RsyncSettings controls rsync behaviour.
+type RsyncSettings struct { + Archive bool `mapstructure:"archive"` + Compress bool `mapstructure:"compress"` + Backup bool `mapstructure:"backup"` + BackupDir string `mapstructure:"backup_dir"` + Progress bool `mapstructure:"progress"` + ExtraArgs []string `mapstructure:"extra_args"` + Ignore []string `mapstructure:"ignore"` +} + +// LogSettings controls logging output. +type LogSettings struct { + File string `mapstructure:"file"` + Format string `mapstructure:"format"` +} + +// Settings groups watcher, rsync, and log tunables. +type Settings struct { + WatcherDebounce int `mapstructure:"watcher_debounce"` + InitialSync bool `mapstructure:"initial_sync"` + Ignore []string `mapstructure:"ignore"` + Rsync RsyncSettings `mapstructure:"rsync"` + Log LogSettings `mapstructure:"log"` +} + +// Config is the top-level configuration. +type Config struct { + Sync SyncSection `mapstructure:"sync"` + Settings Settings `mapstructure:"settings"` +} + +// --------------------------------------------------------------------------- +// Load +// --------------------------------------------------------------------------- + +// Load reads a TOML configuration file at path, applies defaults, validates +// required fields, and returns the populated Config. 
+func Load(path string) (*Config, error) {
+	v := viper.New()
+	v.SetConfigFile(path)
+	v.SetConfigType("toml")
+
+	// Defaults
+	v.SetDefault("sync.interval", 1)
+	v.SetDefault("settings.watcher_debounce", 500)
+	v.SetDefault("settings.initial_sync", false)
+	v.SetDefault("settings.rsync.archive", true)
+	v.SetDefault("settings.rsync.compress", true)
+	v.SetDefault("settings.rsync.backup", false)
+	v.SetDefault("settings.rsync.backup_dir", ".rsync_backup")
+	v.SetDefault("settings.rsync.progress", true)
+	v.SetDefault("settings.log.format", "text")
+
+	if err := v.ReadInConfig(); err != nil {
+		return nil, fmt.Errorf("reading config: %w", err)
+	}
+
+	var cfg Config
+	if err := v.Unmarshal(&cfg); err != nil {
+		return nil, fmt.Errorf("unmarshalling config: %w", err)
+	}
+
+	// Validation: local and remote are required.
+	if strings.TrimSpace(cfg.Sync.Local) == "" {
+		return nil, fmt.Errorf("sync.local is required")
+	}
+	if strings.TrimSpace(cfg.Sync.Remote) == "" {
+		return nil, fmt.Errorf("sync.remote is required")
+	}
+
+	return &cfg, nil
+}
+
+// ---------------------------------------------------------------------------
+// Config file search
+// ---------------------------------------------------------------------------
+
+// FindConfigFile searches the standard locations for an esync config file
+// and returns the first one found, or an empty string.
+func FindConfigFile() string {
+	candidates := []string{"./esync.toml"}
+	// Only add the home-directory candidate when the home directory can be
+	// resolved; otherwise we would probe "/.config/esync/config.toml".
+	if home, err := os.UserHomeDir(); err == nil {
+		candidates = append(candidates, home+"/.config/esync/config.toml")
+	}
+	candidates = append(candidates, "/etc/esync/config.toml")
+	return FindConfigIn(candidates)
+}
+
+// FindConfigIn returns the first path in the list that exists on disk,
+// or an empty string if none exist.
+func FindConfigIn(paths []string) string { + for _, p := range paths { + if _, err := os.Stat(p); err == nil { + return p + } + } + return "" +} + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +// IsRemote returns true if the configuration targets a remote destination, +// either via an explicit SSH section or a remote string that looks like +// "user@host:/path" or "host:/path". +func (c *Config) IsRemote() bool { + if c.Sync.SSH != nil && c.Sync.SSH.Host != "" { + return true + } + return looksRemote(c.Sync.Remote) +} + +// looksRemote returns true if remote resembles an scp-style address +// (e.g. "user@host:/path" or "host:/path") but not a Windows drive +// letter like "C:/". +func looksRemote(remote string) bool { + idx := strings.Index(remote, ":") + if idx < 0 { + return false + } + // Single letter before colon is a Windows drive letter (e.g. "C:/") + if idx == 1 { + return false + } + return true +} + +// AllIgnorePatterns returns the combined ignore list from both +// settings.ignore and settings.rsync.ignore, in that order. +func (c *Config) AllIgnorePatterns() []string { + combined := make([]string, 0, len(c.Settings.Ignore)+len(c.Settings.Rsync.Ignore)) + combined = append(combined, c.Settings.Ignore...) + combined = append(combined, c.Settings.Rsync.Ignore...) + return combined +} + +// --------------------------------------------------------------------------- +// DefaultTOML +// --------------------------------------------------------------------------- + +// DefaultTOML returns a commented TOML template suitable for writing to a +// new configuration file. +func DefaultTOML() string { + return `# esync configuration file + +[sync] +local = "." 
+remote = "user@host:/path/to/dest" +interval = 1 + +# [sync.ssh] +# host = "myserver.com" +# user = "deploy" +# port = 22 +# identity_file = "~/.ssh/id_ed25519" +# interactive_auth = false + +[settings] +watcher_debounce = 500 +initial_sync = false +ignore = [".git", "node_modules", ".DS_Store"] + +[settings.rsync] +archive = true +compress = true +backup = false +backup_dir = ".rsync_backup" +progress = true +extra_args = [] +ignore = [] + +[settings.log] +# file = "/var/log/esync.log" +format = "text" +` +} diff --git a/internal/config/config_test.go b/internal/config/config_test.go @@ -0,0 +1,370 @@ +package config + +import ( + "os" + "path/filepath" + "testing" +) + +// --- Helper: write a TOML string to a temp file and return its path --- +func writeTempTOML(t *testing.T, content string) string { + t.Helper() + dir := t.TempDir() + path := filepath.Join(dir, "esync.toml") + if err := os.WriteFile(path, []byte(content), 0644); err != nil { + t.Fatalf("failed to write temp TOML: %v", err) + } + return path +} + +// ----------------------------------------------------------------------- +// 1. 
TestLoadConfig — full TOML with all fields +// ----------------------------------------------------------------------- +func TestLoadConfig(t *testing.T) { + toml := ` +[sync] +local = "/home/user/project" +remote = "server:/data/project" +interval = 5 + +[settings] +watcher_debounce = 300 +initial_sync = true +ignore = [".git", "node_modules"] + +[settings.rsync] +archive = true +compress = false +backup = true +backup_dir = ".my_backup" +progress = false +extra_args = ["--delete", "--verbose"] +ignore = ["*.tmp", "*.log"] + +[settings.log] +file = "/var/log/esync.log" +format = "json" +` + path := writeTempTOML(t, toml) + cfg, err := Load(path) + if err != nil { + t.Fatalf("Load returned error: %v", err) + } + + // sync section + if cfg.Sync.Local != "/home/user/project" { + t.Errorf("Sync.Local = %q, want %q", cfg.Sync.Local, "/home/user/project") + } + if cfg.Sync.Remote != "server:/data/project" { + t.Errorf("Sync.Remote = %q, want %q", cfg.Sync.Remote, "server:/data/project") + } + if cfg.Sync.Interval != 5 { + t.Errorf("Sync.Interval = %d, want 5", cfg.Sync.Interval) + } + + // settings + if cfg.Settings.WatcherDebounce != 300 { + t.Errorf("Settings.WatcherDebounce = %d, want 300", cfg.Settings.WatcherDebounce) + } + if cfg.Settings.InitialSync != true { + t.Errorf("Settings.InitialSync = %v, want true", cfg.Settings.InitialSync) + } + if len(cfg.Settings.Ignore) != 2 || cfg.Settings.Ignore[0] != ".git" || cfg.Settings.Ignore[1] != "node_modules" { + t.Errorf("Settings.Ignore = %v, want [.git node_modules]", cfg.Settings.Ignore) + } + + // rsync + if cfg.Settings.Rsync.Archive != true { + t.Errorf("Rsync.Archive = %v, want true", cfg.Settings.Rsync.Archive) + } + if cfg.Settings.Rsync.Compress != false { + t.Errorf("Rsync.Compress = %v, want false", cfg.Settings.Rsync.Compress) + } + if cfg.Settings.Rsync.Backup != true { + t.Errorf("Rsync.Backup = %v, want true", cfg.Settings.Rsync.Backup) + } + if cfg.Settings.Rsync.BackupDir != ".my_backup" { + 
t.Errorf("Rsync.BackupDir = %q, want %q", cfg.Settings.Rsync.BackupDir, ".my_backup") + } + if cfg.Settings.Rsync.Progress != false { + t.Errorf("Rsync.Progress = %v, want false", cfg.Settings.Rsync.Progress) + } + if len(cfg.Settings.Rsync.ExtraArgs) != 2 || cfg.Settings.Rsync.ExtraArgs[0] != "--delete" { + t.Errorf("Rsync.ExtraArgs = %v, want [--delete --verbose]", cfg.Settings.Rsync.ExtraArgs) + } + if len(cfg.Settings.Rsync.Ignore) != 2 || cfg.Settings.Rsync.Ignore[0] != "*.tmp" { + t.Errorf("Rsync.Ignore = %v, want [*.tmp *.log]", cfg.Settings.Rsync.Ignore) + } + + // log + if cfg.Settings.Log.File != "/var/log/esync.log" { + t.Errorf("Log.File = %q, want %q", cfg.Settings.Log.File, "/var/log/esync.log") + } + if cfg.Settings.Log.Format != "json" { + t.Errorf("Log.Format = %q, want %q", cfg.Settings.Log.Format, "json") + } +} + +// ----------------------------------------------------------------------- +// 2. TestLoadConfigWithSSH — TOML with [sync.ssh] section +// ----------------------------------------------------------------------- +func TestLoadConfigWithSSH(t *testing.T) { + toml := ` +[sync] +local = "/home/user/src" +remote = "/data/dest" + +[sync.ssh] +host = "myserver.com" +user = "deploy" +port = 2222 +identity_file = "~/.ssh/id_ed25519" +interactive_auth = true +` + path := writeTempTOML(t, toml) + cfg, err := Load(path) + if err != nil { + t.Fatalf("Load returned error: %v", err) + } + + if cfg.Sync.SSH == nil { + t.Fatal("Sync.SSH is nil, expected SSH config") + } + if cfg.Sync.SSH.Host != "myserver.com" { + t.Errorf("SSH.Host = %q, want %q", cfg.Sync.SSH.Host, "myserver.com") + } + if cfg.Sync.SSH.User != "deploy" { + t.Errorf("SSH.User = %q, want %q", cfg.Sync.SSH.User, "deploy") + } + if cfg.Sync.SSH.Port != 2222 { + t.Errorf("SSH.Port = %d, want 2222", cfg.Sync.SSH.Port) + } + if cfg.Sync.SSH.IdentityFile != "~/.ssh/id_ed25519" { + t.Errorf("SSH.IdentityFile = %q, want %q", cfg.Sync.SSH.IdentityFile, "~/.ssh/id_ed25519") + } + if 
cfg.Sync.SSH.InteractiveAuth != true { + t.Errorf("SSH.InteractiveAuth = %v, want true", cfg.Sync.SSH.InteractiveAuth) + } + + // IsRemote should return true when SSH is configured + if !cfg.IsRemote() { + t.Error("IsRemote() = false, want true (SSH config present)") + } +} + +// ----------------------------------------------------------------------- +// 3. TestLoadConfigDefaults — minimal TOML, verify defaults applied +// ----------------------------------------------------------------------- +func TestLoadConfigDefaults(t *testing.T) { + toml := ` +[sync] +local = "/src" +remote = "/dst" +` + path := writeTempTOML(t, toml) + cfg, err := Load(path) + if err != nil { + t.Fatalf("Load returned error: %v", err) + } + + if cfg.Sync.Interval != 1 { + t.Errorf("default Sync.Interval = %d, want 1", cfg.Sync.Interval) + } + if cfg.Settings.WatcherDebounce != 500 { + t.Errorf("default WatcherDebounce = %d, want 500", cfg.Settings.WatcherDebounce) + } + if cfg.Settings.InitialSync != false { + t.Errorf("default InitialSync = %v, want false", cfg.Settings.InitialSync) + } + if cfg.Settings.Rsync.Archive != true { + t.Errorf("default Rsync.Archive = %v, want true", cfg.Settings.Rsync.Archive) + } + if cfg.Settings.Rsync.Compress != true { + t.Errorf("default Rsync.Compress = %v, want true", cfg.Settings.Rsync.Compress) + } + if cfg.Settings.Rsync.Backup != false { + t.Errorf("default Rsync.Backup = %v, want false", cfg.Settings.Rsync.Backup) + } + if cfg.Settings.Rsync.BackupDir != ".rsync_backup" { + t.Errorf("default Rsync.BackupDir = %q, want %q", cfg.Settings.Rsync.BackupDir, ".rsync_backup") + } + if cfg.Settings.Rsync.Progress != true { + t.Errorf("default Rsync.Progress = %v, want true", cfg.Settings.Rsync.Progress) + } + if cfg.Settings.Log.Format != "text" { + t.Errorf("default Log.Format = %q, want %q", cfg.Settings.Log.Format, "text") + } + + // SSH should be nil when not specified + if cfg.Sync.SSH != nil { + t.Errorf("Sync.SSH = %v, want nil", cfg.Sync.SSH) + } 
+} + +// ----------------------------------------------------------------------- +// 4. TestLoadConfigValidation — missing required fields +// ----------------------------------------------------------------------- +func TestLoadConfigValidation(t *testing.T) { + t.Run("missing local", func(t *testing.T) { + toml := ` +[sync] +remote = "/dst" +` + path := writeTempTOML(t, toml) + _, err := Load(path) + if err == nil { + t.Error("expected error for missing local, got nil") + } + }) + + t.Run("missing remote", func(t *testing.T) { + toml := ` +[sync] +local = "/src" +` + path := writeTempTOML(t, toml) + _, err := Load(path) + if err == nil { + t.Error("expected error for missing remote, got nil") + } + }) + + t.Run("missing both", func(t *testing.T) { + toml := ` +[settings] +watcher_debounce = 100 +` + path := writeTempTOML(t, toml) + _, err := Load(path) + if err == nil { + t.Error("expected error for missing local and remote, got nil") + } + }) +} + +// ----------------------------------------------------------------------- +// 5. TestIsRemote — various remote patterns +// ----------------------------------------------------------------------- +func TestIsRemote(t *testing.T) { + tests := []struct { + name string + remote string + ssh *SSHConfig + want bool + }{ + {"user@host:/path", "user@host:/path", nil, true}, + {"host:/path", "host:/path", nil, true}, + {"local relative", "./local", nil, false}, + {"local absolute", "/absolute", nil, false}, + {"windows path", "C:/windows", nil, false}, + {"ssh config present", "/data/dest", &SSHConfig{Host: "myserver"}, true}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + cfg := &Config{ + Sync: SyncSection{ + Remote: tt.remote, + SSH: tt.ssh, + }, + } + got := cfg.IsRemote() + if got != tt.want { + t.Errorf("IsRemote() = %v, want %v (remote=%q, ssh=%v)", got, tt.want, tt.remote, tt.ssh) + } + }) + } +} + +// ----------------------------------------------------------------------- +// 6. 
TestFindConfigFile / TestFindConfigFileNotFound +// ----------------------------------------------------------------------- +func TestFindConfigFile(t *testing.T) { + dir := t.TempDir() + configPath := filepath.Join(dir, "esync.toml") + if err := os.WriteFile(configPath, []byte("[sync]\n"), 0644); err != nil { + t.Fatal(err) + } + + found := FindConfigIn([]string{ + filepath.Join(dir, "nonexistent.toml"), + configPath, + "/also/nonexistent.toml", + }) + if found != configPath { + t.Errorf("FindConfigIn = %q, want %q", found, configPath) + } +} + +func TestFindConfigFileNotFound(t *testing.T) { + found := FindConfigIn([]string{ + "/does/not/exist/esync.toml", + "/also/nonexistent/config.toml", + }) + if found != "" { + t.Errorf("FindConfigIn = %q, want empty string", found) + } +} + +// ----------------------------------------------------------------------- +// 7. TestAllIgnorePatterns — combines both ignore lists +// ----------------------------------------------------------------------- +func TestAllIgnorePatterns(t *testing.T) { + cfg := &Config{ + Settings: Settings{ + Ignore: []string{".git", "node_modules"}, + Rsync: RsyncSettings{ + Ignore: []string{"*.tmp", "*.log"}, + }, + }, + } + + patterns := cfg.AllIgnorePatterns() + expected := []string{".git", "node_modules", "*.tmp", "*.log"} + + if len(patterns) != len(expected) { + t.Fatalf("AllIgnorePatterns length = %d, want %d", len(patterns), len(expected)) + } + for i, p := range patterns { + if p != expected[i] { + t.Errorf("AllIgnorePatterns[%d] = %q, want %q", i, p, expected[i]) + } + } +} + +func TestAllIgnorePatternsEmpty(t *testing.T) { + cfg := &Config{} + patterns := cfg.AllIgnorePatterns() + if len(patterns) != 0 { + t.Errorf("AllIgnorePatterns = %v, want empty", patterns) + } +} + +// ----------------------------------------------------------------------- +// 8. 
TestDefaultTOML — returns a non-empty template +// ----------------------------------------------------------------------- +func TestDefaultTOML(t *testing.T) { + toml := DefaultTOML() + if toml == "" { + t.Error("DefaultTOML() returned empty string") + } + // Should contain key sections + for _, section := range []string{"[sync]", "[settings]", "[settings.rsync]", "[settings.log]"} { + if !containsString(toml, section) { + t.Errorf("DefaultTOML() missing section %q", section) + } + } +} + +func containsString(s, substr string) bool { + return len(s) >= len(substr) && searchString(s, substr) +} + +func searchString(s, substr string) bool { + for i := 0; i <= len(s)-len(substr); i++ { + if s[i:i+len(substr)] == substr { + return true + } + } + return false +} diff --git a/internal/logger/logger.go b/internal/logger/logger.go @@ -0,0 +1,128 @@ +package logger + +import ( + "encoding/json" + "fmt" + "os" + "sort" + "sync" + "time" +) + +// Logger writes structured log entries to a file in either JSON or text format. +type Logger struct { + file *os.File + format string // "json" or "text" + mu sync.Mutex +} + +// New opens (or creates) a log file at path for append-only writing and returns +// a Logger. The format parameter must be "json" or "text"; an empty string +// defaults to "text". +func New(path string, format string) (*Logger, error) { + if format == "" { + format = "text" + } + + f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644) + if err != nil { + return nil, fmt.Errorf("logger: open %s: %w", path, err) + } + + return &Logger{ + file: f, + format: format, + }, nil +} + +// Close closes the underlying log file. +func (l *Logger) Close() error { + l.mu.Lock() + defer l.mu.Unlock() + return l.file.Close() +} + +// Info logs an info-level entry. +func (l *Logger) Info(event string, fields map[string]interface{}) { + l.log("info", event, fields) +} + +// Warn logs a warn-level entry. 
+func (l *Logger) Warn(event string, fields map[string]interface{}) { + l.log("warn", event, fields) +} + +// Error logs an error-level entry. +func (l *Logger) Error(event string, fields map[string]interface{}) { + l.log("error", event, fields) +} + +// Debug logs a debug-level entry. +func (l *Logger) Debug(event string, fields map[string]interface{}) { + l.log("debug", event, fields) +} + +// levelTag maps internal level names to short text-format tags. +var levelTag = map[string]string{ + "info": "INF", + "warn": "WRN", + "error": "ERR", + "debug": "DBG", +} + +// log writes a single log entry in the configured format. +func (l *Logger) log(level, event string, fields map[string]interface{}) { + l.mu.Lock() + defer l.mu.Unlock() + + ts := time.Now().Format("15:04:05") + + switch l.format { + case "json": + l.writeJSON(ts, level, event, fields) + default: + l.writeText(ts, level, event, fields) + } +} + +// writeJSON writes a single JSON log line. +func (l *Logger) writeJSON(ts, level, event string, fields map[string]interface{}) { + entry := make(map[string]interface{}, len(fields)+3) + entry["time"] = ts + entry["level"] = level + entry["event"] = event + for k, v := range fields { + entry[k] = v + } + + data, err := json.Marshal(entry) + if err != nil { + // Best-effort fallback: write the error itself. + fmt.Fprintf(l.file, `{"time":%q,"level":"error","event":"log_marshal_error","error":%q}`+"\n", ts, err.Error()) + return + } + l.file.Write(data) + l.file.Write([]byte("\n")) +} + +// writeText writes a single text log line in the format: +// +// 15:04:05 INF event key=value key2=value2 +func (l *Logger) writeText(ts, level, event string, fields map[string]interface{}) { + tag := levelTag[level] + + // Sort field keys for deterministic output. 
+ keys := make([]string, 0, len(fields)) + for k := range fields { + keys = append(keys, k) + } + sort.Strings(keys) + + line := fmt.Sprintf("%s %s %s", ts, tag, event) + for _, k := range keys { + line += fmt.Sprintf(" %s=%v", k, fields[k]) + } + line += "\n" + + l.file.WriteString(line) +} diff --git a/internal/logger/logger_test.go b/internal/logger/logger_test.go @@ -0,0 +1,200 @@ +package logger + +import ( + "encoding/json" + "os" + "path/filepath" + "strings" + "testing" +) + +func TestJSONLogger(t *testing.T) { + // Create a temp file for the log + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := New(logPath, "json") + if err != nil { + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + // Write an info entry with fields + lg.Info("synced", map[string]interface{}{ + "file": "main.go", + "size": 2150, + }) + + // Read the file contents + data, err := os.ReadFile(logPath) + if err != nil { + t.Fatalf("failed to read log file: %v", err) + } + + line := strings.TrimSpace(string(data)) + if line == "" { + t.Fatal("log file is empty") + } + + // Parse as JSON + var entry map[string]interface{} + if err := json.Unmarshal([]byte(line), &entry); err != nil { + t.Fatalf("log entry is not valid JSON: %v\nline: %s", err, line) + } + + // Verify required fields + if entry["level"] != "info" { + t.Errorf("expected level=info, got %v", entry["level"]) + } + if entry["event"] != "synced" { + t.Errorf("expected event=synced, got %v", entry["event"]) + } + if entry["file"] != "main.go" { + t.Errorf("expected file=main.go, got %v", entry["file"]) + } + // JSON numbers are float64 by default + if entry["size"] != float64(2150) { + t.Errorf("expected size=2150, got %v", entry["size"]) + } + if _, ok := entry["time"]; !ok { + t.Error("expected time field to be present") + } +} + +func TestTextLogger(t *testing.T) { + // Create a temp file for the log + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := 
New(logPath, "text") + if err != nil { + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + // Write an info entry with fields + lg.Info("synced", map[string]interface{}{ + "file": "main.go", + "size": 2150, + }) + + // Read the file contents + data, err := os.ReadFile(logPath) + if err != nil { + t.Fatalf("failed to read log file: %v", err) + } + + line := strings.TrimSpace(string(data)) + if line == "" { + t.Fatal("log file is empty") + } + + // Verify text format contains INF and event name + if !strings.Contains(line, "INF") { + t.Errorf("expected line to contain 'INF', got: %s", line) + } + if !strings.Contains(line, "synced") { + t.Errorf("expected line to contain 'synced', got: %s", line) + } + if !strings.Contains(line, "file=main.go") { + t.Errorf("expected line to contain 'file=main.go', got: %s", line) + } +} + +func TestDefaultFormatIsText(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := New(logPath, "") + if err != nil { + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + if lg.format != "text" { + t.Errorf("expected default format to be 'text', got %q", lg.format) + } +} + +func TestWarnLevel(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := New(logPath, "text") + if err != nil { + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + lg.Warn("disk_low", map[string]interface{}{ + "pct": 95, + }) + + data, err := os.ReadFile(logPath) + if err != nil { + t.Fatalf("failed to read log file: %v", err) + } + + line := strings.TrimSpace(string(data)) + if !strings.Contains(line, "WRN") { + t.Errorf("expected line to contain 'WRN', got: %s", line) + } + if !strings.Contains(line, "disk_low") { + t.Errorf("expected line to contain 'disk_low', got: %s", line) + } +} + +func TestErrorLevel(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := New(logPath, "json") + if err != nil 
{ + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + lg.Error("connection_failed", map[string]interface{}{ + "host": "example.com", + }) + + data, err := os.ReadFile(logPath) + if err != nil { + t.Fatalf("failed to read log file: %v", err) + } + + var entry map[string]interface{} + if err := json.Unmarshal([]byte(strings.TrimSpace(string(data))), &entry); err != nil { + t.Fatalf("not valid JSON: %v", err) + } + + if entry["level"] != "error" { + t.Errorf("expected level=error, got %v", entry["level"]) + } +} + +func TestDebugLevel(t *testing.T) { + dir := t.TempDir() + logPath := filepath.Join(dir, "test.log") + + lg, err := New(logPath, "text") + if err != nil { + t.Fatalf("New() returned error: %v", err) + } + defer lg.Close() + + lg.Debug("trace_check", nil) + + data, err := os.ReadFile(logPath) + if err != nil { + t.Fatalf("failed to read log file: %v", err) + } + + line := strings.TrimSpace(string(data)) + if !strings.Contains(line, "DBG") { + t.Errorf("expected line to contain 'DBG', got: %s", line) + } + if !strings.Contains(line, "trace_check") { + t.Errorf("expected line to contain 'trace_check', got: %s", line) + } +} diff --git a/internal/syncer/syncer.go b/internal/syncer/syncer.go @@ -0,0 +1,249 @@ +// Package syncer builds and executes rsync commands based on esync +// configuration, handling local and remote (SSH) destinations. +package syncer + +import ( + "fmt" + "os/exec" + "regexp" + "strconv" + "strings" + "time" + + "github.com/eloualiche/esync/internal/config" +) + +// --------------------------------------------------------------------------- +// Types +// --------------------------------------------------------------------------- + +// Result captures the outcome of a sync operation. +type Result struct { + Success bool + FilesCount int + BytesTotal int64 + Duration time.Duration + Files []string + ErrorMessage string +} + +// Syncer builds and executes rsync commands from a config.Config. 
+type Syncer struct {
+	cfg    *config.Config
+	DryRun bool
+}
+
+// ---------------------------------------------------------------------------
+// Constructor
+// ---------------------------------------------------------------------------
+
+// New returns a Syncer configured from the given Config.
+func New(cfg *config.Config) *Syncer {
+	return &Syncer{cfg: cfg}
+}
+
+// ---------------------------------------------------------------------------
+// Public methods
+// ---------------------------------------------------------------------------
+
+// BuildCommand constructs the rsync argument list with all flags, excludes,
+// SSH options, extra_args, source (trailing /), and destination.
+func (s *Syncer) BuildCommand() []string {
+	// --stats makes rsync print the summary block that extractStats parses.
+	args := []string{"rsync", "--recursive", "--times", "--stats", "--copy-unsafe-links"}
+
+	rsync := s.cfg.Settings.Rsync
+
+	// Conditional flags
+	if rsync.Archive {
+		args = append(args, "--archive")
+	}
+	if rsync.Compress {
+		args = append(args, "--compress")
+	}
+	if rsync.Progress {
+		args = append(args, "--progress")
+	}
+	if rsync.Backup {
+		args = append(args, "--backup")
+		if rsync.BackupDir != "" {
+			args = append(args, "--backup-dir="+rsync.BackupDir)
+		}
+	}
+	if s.DryRun {
+		args = append(args, "--dry-run")
+	}
+
+	// Exclude patterns (strip **/ prefix)
+	for _, pattern := range s.cfg.AllIgnorePatterns() {
+		cleaned := strings.TrimPrefix(pattern, "**/")
+		args = append(args, "--exclude="+cleaned)
+	}
+
+	// Extra args passthrough
+	args = append(args, rsync.ExtraArgs...)
+
+	// SSH transport
+	if sshCmd := s.buildSSHCommand(); sshCmd != "" {
+		args = append(args, "-e", sshCmd)
+	}
+
+	// Source (must end with / so rsync copies directory contents)
+	source := s.cfg.Sync.Local
+	if !strings.HasSuffix(source, "/") {
+		source += "/"
+	}
+	args = append(args, source)
+
+	// Destination
+	args = append(args, s.buildDestination())
+
+	return args
+}
+
+// Run executes the rsync command, captures output, and parses stats.
+func (s *Syncer) Run() (*Result, error) { + args := s.BuildCommand() + + start := time.Now() + + // args[0] is "rsync", the rest are arguments + cmd := exec.Command(args[0], args[1:]...) + output, err := cmd.CombinedOutput() + duration := time.Since(start) + + outStr := string(output) + + result := &Result{ + Duration: duration, + Files: s.extractFiles(outStr), + } + + count, bytes := s.extractStats(outStr) + result.FilesCount = count + result.BytesTotal = bytes + + if err != nil { + result.Success = false + result.ErrorMessage = fmt.Sprintf("rsync error: %v\n%s", err, outStr) + return result, err + } + + result.Success = true + return result, nil +} + +// --------------------------------------------------------------------------- +// Private helpers +// --------------------------------------------------------------------------- + +// buildSSHCommand builds the SSH command string with port, identity file, +// and ControlMaster keepalive options. Returns empty string if no SSH config. +func (s *Syncer) buildSSHCommand() string { + ssh := s.cfg.Sync.SSH + if ssh == nil || ssh.Host == "" { + return "" + } + + parts := []string{"ssh"} + + if ssh.Port != 0 { + parts = append(parts, fmt.Sprintf("-p %d", ssh.Port)) + } + + if ssh.IdentityFile != "" { + parts = append(parts, fmt.Sprintf("-i %s", ssh.IdentityFile)) + } + + // ControlMaster keepalive options + parts = append(parts, + "-o ControlMaster=auto", + "-o ControlPath=/tmp/esync-ssh-%r@%h:%p", + "-o ControlPersist=600", + ) + + return strings.Join(parts, " ") +} + +// buildDestination builds the destination string from SSH config or the raw +// remote string. When SSH config is present, it constructs user@host:path. 
+func (s *Syncer) buildDestination() string {
+	ssh := s.cfg.Sync.SSH
+	if ssh == nil || ssh.Host == "" {
+		return s.cfg.Sync.Remote
+	}
+
+	remote := s.cfg.Sync.Remote
+	if ssh.User != "" {
+		return fmt.Sprintf("%s@%s:%s", ssh.User, ssh.Host, remote)
+	}
+	return fmt.Sprintf("%s:%s", ssh.Host, remote)
+}
+
+// extractFiles extracts file names from rsync output.
+// rsync lists transferred files one per line between the header
+// "sending incremental file list" and the summary lines at the end.
+func (s *Syncer) extractFiles(output string) []string {
+	var files []string
+	lines := strings.Split(output, "\n")
+	inList := false
+
+	for _, line := range lines {
+		line = strings.TrimSpace(line)
+
+		if line == "sending incremental file list" {
+			inList = true
+			continue
+		}
+
+		if !inList {
+			continue
+		}
+
+		// Summary lines end the list; blank lines are skipped, not terminal,
+		// because they can appear between progress updates before the summary.
+		if line == "" || strings.HasPrefix(line, "sent ") || strings.HasPrefix(line, "total size") {
+			if line == "" {
+				continue
+			}
+			break
+		}
+
+		// Skip progress lines (contain % or bytes/sec mid-line)
+		if strings.Contains(line, "%") || strings.Contains(line, "bytes/sec") {
+			continue
+		}
+
+		// Stats lines ("Number of ...") also end the list.
+		if strings.HasPrefix(line, "Number of") {
+			break
+		}
+
+		files = append(files, line)
+	}
+
+	return files
+}
+
+// extractStats extracts the file count and total bytes from rsync's --stats
+// summary. It looks for "Number of regular files transferred: N" and
+// "Total file size: N bytes" patterns.
+func (s *Syncer) extractStats(output string) (int, int64) {
+	var count int
+	var totalBytes int64
+
+	// Match "Number of regular files transferred: 3"
+	reCount := regexp.MustCompile(`Number of regular files transferred:\s*([\d,]+)`)
+	if m := reCount.FindStringSubmatch(output); len(m) > 1 {
+		cleaned := strings.ReplaceAll(m[1], ",", "")
+		if n, err := strconv.Atoi(cleaned); err == nil {
+			count = n
+		}
+	}
+
+	// Match "Total file size: 5,678 bytes"
+	reBytes := regexp.MustCompile(`Total file size:\s*([\d,]+)`)
+	if m := reBytes.FindStringSubmatch(output); len(m) > 1 {
+		cleaned := strings.ReplaceAll(m[1], ",", "")
+		if n, err := strconv.ParseInt(cleaned, 10, 64); err == nil {
+			totalBytes = n
+		}
+	}
+
+	return count, totalBytes
+}
diff --git a/internal/syncer/syncer_test.go b/internal/syncer/syncer_test.go
@@ -0,0 +1,340 @@
+package syncer
+
+import (
+	"strings"
+	"testing"
+
+	"github.com/eloualiche/esync/internal/config"
+)
+
+// ---------------------------------------------------------------------------
+// Helper: build a minimal Config for testing
+// ---------------------------------------------------------------------------
+func minimalConfig(local, remote string) *config.Config {
+	return &config.Config{
+		Sync: config.SyncSection{
+			Local:  local,
+			Remote: remote,
+		},
+		Settings: config.Settings{
+			Rsync: config.RsyncSettings{
+				Archive:  true,
+				Compress: true,
+				Progress: true,
+			},
+		},
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 1. TestBuildCommand_Local — verify rsync flags for local sync
+// ---------------------------------------------------------------------------
+func TestBuildCommand_Local(t *testing.T) {
+	cfg := minimalConfig("/home/user/src", "/data/dest")
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	// Should start with rsync
+	if cmd[0] != "rsync" {
+		t.Errorf("cmd[0] = %q, want %q", cmd[0], "rsync")
+	}
+
+	// Must contain base flags
+	for _, flag := range []string{"--recursive", "--times", "--progress", "--copy-unsafe-links"} {
+		if !containsArg(cmd, flag) {
+			t.Errorf("missing flag %q in %v", flag, cmd)
+		}
+	}
+
+	// Archive and compress are true by default
+	if !containsArg(cmd, "--archive") {
+		t.Error("missing --archive flag")
+	}
+	if !containsArg(cmd, "--compress") {
+		t.Error("missing --compress flag")
+	}
+
+	// Source must end with /
+	source := cmd[len(cmd)-2]
+	if !strings.HasSuffix(source, "/") {
+		t.Errorf("source = %q, must end with /", source)
+	}
+	if source != "/home/user/src/" {
+		t.Errorf("source = %q, want %q", source, "/home/user/src/")
+	}
+
+	// Destination is last argument
+	dest := cmd[len(cmd)-1]
+	if dest != "/data/dest" {
+		t.Errorf("destination = %q, want %q", dest, "/data/dest")
+	}
+
+	// No -e flag for local sync without SSH config
+	if containsArgPrefix(cmd, "-e") {
+		t.Error("should not have -e flag for local sync")
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 2. TestBuildCommand_Remote — verify remote destination format
+// ---------------------------------------------------------------------------
+func TestBuildCommand_Remote(t *testing.T) {
+	cfg := minimalConfig("/home/user/src", "user@server:/data/dest")
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	// Destination should be the raw remote string
+	dest := cmd[len(cmd)-1]
+	if dest != "user@server:/data/dest" {
+		t.Errorf("destination = %q, want %q", dest, "user@server:/data/dest")
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 3. TestBuildCommand_SSHConfig — verify -e flag with SSH options
+// ---------------------------------------------------------------------------
+func TestBuildCommand_SSHConfig(t *testing.T) {
+	cfg := minimalConfig("/home/user/src", "/data/dest")
+	cfg.Sync.SSH = &config.SSHConfig{
+		Host:         "myserver.com",
+		User:         "deploy",
+		Port:         2222,
+		IdentityFile: "~/.ssh/id_ed25519",
+	}
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	// Should contain -e flag
+	eIdx := indexOfArg(cmd, "-e")
+	if eIdx < 0 {
+		t.Fatal("missing -e flag")
+	}
+
+	// The SSH command string follows -e
+	sshCmd := cmd[eIdx+1]
+	if !strings.Contains(sshCmd, "ssh") {
+		t.Errorf("SSH command should start with ssh, got %q", sshCmd)
+	}
+	if !strings.Contains(sshCmd, "-p 2222") {
+		t.Errorf("SSH command missing port, got %q", sshCmd)
+	}
+	if !strings.Contains(sshCmd, "-i ~/.ssh/id_ed25519") {
+		t.Errorf("SSH command missing identity file, got %q", sshCmd)
+	}
+	// ControlMaster options
+	if !strings.Contains(sshCmd, "-o ControlMaster=auto") {
+		t.Errorf("SSH command missing ControlMaster, got %q", sshCmd)
+	}
+	if !strings.Contains(sshCmd, "-o ControlPath=/tmp/esync-ssh-%r@%h:%p") {
+		t.Errorf("SSH command missing ControlPath, got %q", sshCmd)
+	}
+	if !strings.Contains(sshCmd, "-o ControlPersist=600") {
+		t.Errorf("SSH command missing ControlPersist, got %q", sshCmd)
+	}
+
+	// Destination should be user@host:/path when SSH is configured
+	dest := cmd[len(cmd)-1]
+	if dest != "deploy@myserver.com:/data/dest" {
+		t.Errorf("destination = %q, want %q", dest, "deploy@myserver.com:/data/dest")
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 4. TestBuildCommand_ExcludePatterns — verify --exclude for combined patterns
+// ---------------------------------------------------------------------------
+func TestBuildCommand_ExcludePatterns(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+	cfg.Settings.Ignore = []string{".git", "node_modules"}
+	cfg.Settings.Rsync.Ignore = []string{"**/*.tmp", "*.log"}
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	// Should have --exclude for each pattern
+	// **/*.tmp should be stripped to *.tmp
+	expectedExcludes := []string{".git", "node_modules", "*.tmp", "*.log"}
+	for _, pattern := range expectedExcludes {
+		expected := "--exclude=" + pattern
+		if !containsArg(cmd, expected) {
+			t.Errorf("missing %q in %v", expected, cmd)
+		}
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 5. TestBuildCommand_ExtraArgs — verify passthrough of extra_args
+// ---------------------------------------------------------------------------
+func TestBuildCommand_ExtraArgs(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+	cfg.Settings.Rsync.ExtraArgs = []string{"--delete", "--verbose"}
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	if !containsArg(cmd, "--delete") {
+		t.Errorf("missing --delete in %v", cmd)
+	}
+	if !containsArg(cmd, "--verbose") {
+		t.Errorf("missing --verbose in %v", cmd)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 6. TestBuildCommand_DryRun — verify --dry-run flag
+// ---------------------------------------------------------------------------
+func TestBuildCommand_DryRun(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+
+	s := New(cfg)
+	s.DryRun = true
+	cmd := s.BuildCommand()
+
+	if !containsArg(cmd, "--dry-run") {
+		t.Errorf("missing --dry-run in %v", cmd)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 7. TestBuildCommand_Backup — verify --backup and --backup-dir flags
+// ---------------------------------------------------------------------------
+func TestBuildCommand_Backup(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+	cfg.Settings.Rsync.Backup = true
+	cfg.Settings.Rsync.BackupDir = ".my_backup"
+
+	s := New(cfg)
+	cmd := s.BuildCommand()
+
+	if !containsArg(cmd, "--backup") {
+		t.Errorf("missing --backup in %v", cmd)
+	}
+	if !containsArg(cmd, "--backup-dir=.my_backup") {
+		t.Errorf("missing --backup-dir=.my_backup in %v", cmd)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// Additional tests for helper functions
+// ---------------------------------------------------------------------------
+
+func TestExtractFiles(t *testing.T) {
+	output := `sending incremental file list
+src/main.go
+src/utils.go
+config.toml
+
+sent 1,234 bytes received 56 bytes 2,580.00 bytes/sec
+total size is 5,678 speedup is 4.40
+`
+	s := New(minimalConfig("/src", "/dst"))
+	files := s.extractFiles(output)
+
+	// Should extract the file lines (not the header or stats)
+	if len(files) != 3 {
+		t.Fatalf("extractFiles returned %d files, want 3: %v", len(files), files)
+	}
+	expected := []string{"src/main.go", "src/utils.go", "config.toml"}
+	for i, f := range files {
+		if f != expected[i] {
+			t.Errorf("files[%d] = %q, want %q", i, f, expected[i])
+		}
+	}
+}
+
+func TestExtractStats(t *testing.T) {
+	output := `sending incremental file list
+src/main.go
+
+Number of files: 10
+Number of regular files transferred: 3
+Total file size: 5,678 bytes
+sent 1,234 bytes received 56 bytes 2,580.00 bytes/sec
+total size is 5,678 speedup is 4.40
+`
+	s := New(minimalConfig("/src", "/dst"))
+	count, bytes := s.extractStats(output)
+
+	if count != 3 {
+		t.Errorf("extractStats count = %d, want 3", count)
+	}
+	if bytes != 5678 {
+		t.Errorf("extractStats bytes = %d, want 5678", bytes)
+	}
+}
+
+func TestBuildSSHCommand_NoSSH(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+	s := New(cfg)
+	sshCmd := s.buildSSHCommand()
+	if sshCmd != "" {
+		t.Errorf("buildSSHCommand() = %q, want empty string for no SSH config", sshCmd)
+	}
+}
+
+func TestBuildDestination_Local(t *testing.T) {
+	cfg := minimalConfig("/src", "/dst")
+	s := New(cfg)
+	dest := s.buildDestination()
+	if dest != "/dst" {
+		t.Errorf("buildDestination() = %q, want %q", dest, "/dst")
+	}
+}
+
+func TestBuildDestination_SSHWithUser(t *testing.T) {
+	cfg := minimalConfig("/src", "/remote/path")
+	cfg.Sync.SSH = &config.SSHConfig{
+		Host: "myserver.com",
+		User: "deploy",
+	}
+	s := New(cfg)
+	dest := s.buildDestination()
+	if dest != "deploy@myserver.com:/remote/path" {
+		t.Errorf("buildDestination() = %q, want %q", dest, "deploy@myserver.com:/remote/path")
+	}
+}
+
+func TestBuildDestination_SSHWithoutUser(t *testing.T) {
+	cfg := minimalConfig("/src", "/remote/path")
+	cfg.Sync.SSH = &config.SSHConfig{
+		Host: "myserver.com",
+	}
+	s := New(cfg)
+	dest := s.buildDestination()
+	if dest != "myserver.com:/remote/path" {
+		t.Errorf("buildDestination() = %q, want %q", dest, "myserver.com:/remote/path")
+	}
+}
+
+// ---------------------------------------------------------------------------
+// Test helpers
+// ---------------------------------------------------------------------------
+
+func containsArg(args []string, target string) bool {
+	for _, a := range args {
+		if a == target {
+			return true
+		}
+	}
+	return false
+}
+
+func containsArgPrefix(args []string, prefix string) bool {
+	for _, a := range args {
+		if strings.HasPrefix(a, prefix) {
+			return true
+		}
+	}
+	return false
+}
+
+func indexOfArg(args []string, target string) int {
+	for i, a := range args {
+		if a == target {
+			return i
+		}
+	}
+	return -1
+}
diff --git a/internal/tui/app.go b/internal/tui/app.go
@@ -0,0 +1,161 @@
+package tui
+
+import (
+	tea "github.com/charmbracelet/bubbletea"
+)
+
+// ---------------------------------------------------------------------------
+// View enum
+// ---------------------------------------------------------------------------
+
+type view int
+
+const (
+	viewDashboard view = iota
+	viewLogs
+)
+
+// ---------------------------------------------------------------------------
+// AppModel — root Bubbletea model
+// ---------------------------------------------------------------------------
+
+// AppModel is the root Bubbletea model that switches between the dashboard
+// and log views.
+type AppModel struct {
+	dashboard  DashboardModel
+	logView    LogViewModel
+	current    view
+	syncEvents chan SyncEvent
+	logEntries chan LogEntry
+}
+
+// NewApp creates a new AppModel wired to the given local and remote paths.
+func NewApp(local, remote string) *AppModel {
+	return &AppModel{
+		dashboard:  NewDashboard(local, remote),
+		logView:    NewLogView(),
+		current:    viewDashboard,
+		syncEvents: make(chan SyncEvent, 64),
+		logEntries: make(chan LogEntry, 64),
+	}
+}
+
+// SyncEventChan returns a send-only channel for pushing sync events into
+// the TUI from external code.
+func (m *AppModel) SyncEventChan() chan<- SyncEvent {
+	return m.syncEvents
+}
+
+// LogEntryChan returns a send-only channel for pushing log entries into
+// the TUI from external code.
+func (m *AppModel) LogEntryChan() chan<- LogEntry {
+	return m.logEntries
+}
+
+// ---------------------------------------------------------------------------
+// tea.Model interface
+// ---------------------------------------------------------------------------
+
+// Init returns a batch of the dashboard init command and the two channel
+// listener commands.
+func (m AppModel) Init() tea.Cmd {
+	return tea.Batch(
+		m.dashboard.Init(),
+		m.listenSyncEvents(),
+		m.listenLogEntries(),
+	)
+}
+
+// Update delegates messages to the active view and handles global keys.
+func (m AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
+	switch msg := msg.(type) {
+
+	case tea.KeyMsg:
+		// Global: quit from any view.
+		switch msg.String() {
+		case "q", "ctrl+c":
+			// Let the current view handle it if it's filtering.
+			if m.current == viewDashboard && m.dashboard.filtering {
+				break
+			}
+			if m.current == viewLogs && m.logView.filtering {
+				break
+			}
+			return m, tea.Quit
+		case "l":
+			// Toggle view (only when not filtering).
+			if m.current == viewDashboard && !m.dashboard.filtering {
+				m.current = viewLogs
+				return m, nil
+			}
+			if m.current == viewLogs && !m.logView.filtering {
+				m.current = viewDashboard
+				return m, nil
+			}
+		}
+
+	case SyncEventMsg:
+		// Dispatch to dashboard and re-listen.
+		var cmd tea.Cmd
+		m.dashboard, cmd = m.dashboard.Update(msg)
+		return m, tea.Batch(cmd, m.listenSyncEvents())
+
+	case LogEntryMsg:
+		// Dispatch to log view and re-listen.
+		var cmd tea.Cmd
+		m.logView, cmd = m.logView.Update(msg)
+		return m, tea.Batch(cmd, m.listenLogEntries())
+
+	case tea.WindowSizeMsg:
+		// Propagate to both views.
+		m.dashboard, _ = m.dashboard.Update(msg)
+		m.logView, _ = m.logView.Update(msg)
+		return m, nil
+	}
+
+	// Delegate remaining messages to the active view.
+	switch m.current {
+	case viewDashboard:
+		var cmd tea.Cmd
+		m.dashboard, cmd = m.dashboard.Update(msg)
+		return m, cmd
+	case viewLogs:
+		var cmd tea.Cmd
+		m.logView, cmd = m.logView.Update(msg)
+		return m, cmd
+	}
+
+	return m, nil
+}
+
+// View renders the currently active view.
+func (m AppModel) View() string {
+	switch m.current {
+	case viewLogs:
+		return m.logView.View()
+	default:
+		return m.dashboard.View()
+	}
+}
+
+// ---------------------------------------------------------------------------
+// Channel listeners
+// ---------------------------------------------------------------------------
+
+// listenSyncEvents returns a Cmd that blocks until a SyncEvent arrives on
+// the channel, then wraps it as a SyncEventMsg.
+func (m AppModel) listenSyncEvents() tea.Cmd {
+	ch := m.syncEvents
+	return func() tea.Msg {
+		return SyncEventMsg(<-ch)
+	}
+}
+
+// listenLogEntries returns a Cmd that blocks until a LogEntry arrives on
+// the channel, then wraps it as a LogEntryMsg.
+func (m AppModel) listenLogEntries() tea.Cmd {
+	ch := m.logEntries
+	return func() tea.Msg {
+		return LogEntryMsg(<-ch)
+	}
+}
diff --git a/internal/tui/dashboard.go b/internal/tui/dashboard.go
@@ -0,0 +1,272 @@
+package tui
+
+import (
+	"fmt"
+	"strings"
+	"time"
+
+	tea "github.com/charmbracelet/bubbletea"
+)
+
+// ---------------------------------------------------------------------------
+// Messages
+// ---------------------------------------------------------------------------
+
+// tickMsg is sent on every one-second tick for periodic refresh.
+type tickMsg time.Time
+
+// SyncEventMsg carries a single sync event into the TUI.
+type SyncEventMsg SyncEvent
+
+// SyncStatsMsg carries aggregate sync statistics.
+type SyncStatsMsg struct {
+	TotalSynced int
+	TotalBytes  string
+	TotalErrors int
+}
+
+// ---------------------------------------------------------------------------
+// Types
+// ---------------------------------------------------------------------------
+
+// SyncEvent represents a single file sync operation.
+type SyncEvent struct {
+	File     string
+	Size     string
+	Duration time.Duration
+	Status   string // "synced", "syncing", "error"
+	Time     time.Time
+}
+
+// DashboardModel is the main TUI view showing sync status and recent events.
+type DashboardModel struct {
+	local, remote string
+	status        string // "watching", "syncing", "paused", "error"
+	lastSync      time.Time
+	events        []SyncEvent
+	totalSynced   int
+	totalBytes    string
+	totalErrors   int
+	width, height int
+	filter        string
+	filtering     bool
+}
+
+// ---------------------------------------------------------------------------
+// Constructor
+// ---------------------------------------------------------------------------
+
+// NewDashboard returns a DashboardModel configured with the given local and
+// remote paths.
+func NewDashboard(local, remote string) DashboardModel {
+	return DashboardModel{
+		local:      local,
+		remote:     remote,
+		status:     "watching",
+		totalBytes: "0B",
+	}
+}
+
+// ---------------------------------------------------------------------------
+// tea.Model interface
+// ---------------------------------------------------------------------------
+
+// Init starts the periodic tick timer.
+func (m DashboardModel) Init() tea.Cmd {
+	return tea.Tick(time.Second, func(t time.Time) tea.Msg {
+		return tickMsg(t)
+	})
+}
+
+// Update handles messages for the dashboard view.
+func (m DashboardModel) Update(msg tea.Msg) (DashboardModel, tea.Cmd) {
+	switch msg := msg.(type) {
+
+	case tea.KeyMsg:
+		if m.filtering {
+			return m.updateFiltering(msg)
+		}
+		return m.updateNormal(msg)
+
+	case tickMsg:
+		// Re-arm the ticker.
+		return m, tea.Tick(time.Second, func(t time.Time) tea.Msg {
+			return tickMsg(t)
+		})
+
+	case SyncEventMsg:
+		evt := SyncEvent(msg)
+		// Prepend; cap at 100.
+		m.events = append([]SyncEvent{evt}, m.events...)
+		if len(m.events) > 100 {
+			m.events = m.events[:100]
+		}
+		if evt.Status == "synced" {
+			m.lastSync = evt.Time
+		}
+		return m, nil
+
+	case SyncStatsMsg:
+		m.totalSynced = msg.TotalSynced
+		m.totalBytes = msg.TotalBytes
+		m.totalErrors = msg.TotalErrors
+		return m, nil
+
+	case tea.WindowSizeMsg:
+		m.width = msg.Width
+		m.height = msg.Height
+		return m, nil
+	}
+
+	return m, nil
+}
+
+// updateNormal handles keys when NOT in filtering mode.
+func (m DashboardModel) updateNormal(msg tea.KeyMsg) (DashboardModel, tea.Cmd) {
+	switch msg.String() {
+	case "q", "ctrl+c":
+		return m, tea.Quit
+	case "p":
+		if m.status == "paused" {
+			m.status = "watching"
+		} else {
+			m.status = "paused"
+		}
+	case "/":
+		m.filtering = true
+		m.filter = ""
+	}
+	return m, nil
+}
+
+// updateFiltering handles keys when in filtering mode.
+func (m DashboardModel) updateFiltering(msg tea.KeyMsg) (DashboardModel, tea.Cmd) {
+	switch msg.Type {
+	case tea.KeyEnter:
+		m.filtering = false
+	case tea.KeyEscape:
+		m.filter = ""
+		m.filtering = false
+	case tea.KeyBackspace:
+		if len(m.filter) > 0 {
+			m.filter = m.filter[:len(m.filter)-1]
+		}
+	default:
+		if len(msg.String()) == 1 {
+			m.filter += msg.String()
+		}
+	}
+	return m, nil
+}
+
+// View renders the dashboard.
+func (m DashboardModel) View() string {
+	var b strings.Builder
+
+	// --- Header ---
+	header := titleStyle.Render(" esync ") + dimStyle.Render(strings.Repeat("─", max(0, m.width-8)))
+	b.WriteString(header + "\n")
+	b.WriteString(fmt.Sprintf(" %s → %s\n", m.local, m.remote))
+
+	// Status line
+	statusIcon, statusText := m.statusDisplay()
+	agoText := ""
+	if !m.lastSync.IsZero() {
+		ago := time.Since(m.lastSync).Truncate(time.Second)
+		agoText = fmt.Sprintf(" (synced %s ago)", ago)
+	}
+	b.WriteString(fmt.Sprintf(" %s %s%s\n", statusIcon, statusText, dimStyle.Render(agoText)))
+	b.WriteString("\n")
+
+	// --- Recent events ---
+	b.WriteString(" " + titleStyle.Render("Recent") + " " + dimStyle.Render(strings.Repeat("─", max(0, m.width-11))) + "\n")
+
+	filtered := m.filteredEvents()
+	visible := min(len(filtered), max(0, m.height-10))
+	for i := 0; i < visible; i++ {
+		evt := filtered[i]
+		b.WriteString(" " + m.renderEvent(evt) + "\n")
+	}
+	b.WriteString("\n")
+
+	// --- Stats ---
+	b.WriteString(" " + titleStyle.Render("Stats") + " " + dimStyle.Render(strings.Repeat("─", max(0, m.width-10))) + "\n")
+	stats := fmt.Sprintf(" %d synced │ %s total │ %d errors",
+		m.totalSynced, m.totalBytes, m.totalErrors)
+	b.WriteString(stats + "\n")
+	b.WriteString("\n")
+
+	// --- Help / filter ---
+	if m.filtering {
+		b.WriteString(helpStyle.Render(fmt.Sprintf(" filter: %s█ (enter apply esc clear)", m.filter)))
+	} else {
+		help := " q quit p pause r full resync l logs d dry-run / filter"
+		if m.filter != "" {
+			help += fmt.Sprintf(" [filter: %s]", m.filter)
+		}
+		b.WriteString(helpStyle.Render(help))
+	}
+	b.WriteString("\n")
+
+	return b.String()
+}
+
+// ---------------------------------------------------------------------------
+// Helpers
+// ---------------------------------------------------------------------------
+
+// statusDisplay returns the icon and styled text for the current status.
+func (m DashboardModel) statusDisplay() (string, string) {
+	switch m.status {
+	case "watching":
+		return statusSynced.Render("●"), statusSynced.Render("Watching")
+	case "syncing":
+		return statusSyncing.Render("⟳"), statusSyncing.Render("Syncing")
+	case "paused":
+		return dimStyle.Render("⏸"), dimStyle.Render("Paused")
+	case "error":
+		return statusError.Render("✗"), statusError.Render("Error")
+	default:
+		return "?", m.status
+	}
+}
+
+// renderEvent formats a single sync event line.
+func (m DashboardModel) renderEvent(evt SyncEvent) string {
+	switch evt.Status {
+	case "synced":
+		name := padRight(evt.File, 30)
+		return statusSynced.Render("✓") + " " + name + dimStyle.Render(fmt.Sprintf("%8s %5s", evt.Size, evt.Duration.Truncate(100*time.Millisecond)))
+	case "syncing":
+		name := padRight(evt.File, 30)
+		return statusSyncing.Render("⟳") + " " + name + statusSyncing.Render("syncing...")
+	case "error":
+		name := padRight(evt.File, 30)
+		return statusError.Render("✗") + " " + name + statusError.Render("error")
+	default:
+		return evt.File
+	}
+}
+
+// filteredEvents returns events matching the current filter (case-insensitive).
+func (m DashboardModel) filteredEvents() []SyncEvent {
+	if m.filter == "" {
+		return m.events
+	}
+	lf := strings.ToLower(m.filter)
+	var out []SyncEvent
+	for _, evt := range m.events {
+		if strings.Contains(strings.ToLower(evt.File), lf) {
+			out = append(out, evt)
+		}
+	}
+	return out
+}
+
+// padRight pads s with spaces to width n, truncating if necessary.
+func padRight(s string, n int) string {
+	if len(s) >= n {
+		return s[:n]
+	}
+	return s + strings.Repeat(" ", n-len(s))
+}
diff --git a/internal/tui/logview.go b/internal/tui/logview.go
@@ -0,0 +1,202 @@
+package tui
+
+import (
+	"fmt"
+	"strings"
+	"time"
+
+	tea "github.com/charmbracelet/bubbletea"
+)
+
+// ---------------------------------------------------------------------------
+// Messages
+// ---------------------------------------------------------------------------
+
+// LogEntryMsg carries a log entry into the TUI.
+type LogEntryMsg LogEntry
+
+// ---------------------------------------------------------------------------
+// Types
+// ---------------------------------------------------------------------------
+
+// LogEntry represents a single log line.
+type LogEntry struct {
+	Time    time.Time
+	Level   string // "INF", "WRN", "ERR"
+	Message string
+}
+
+// LogViewModel is a scrollable log view. It is not a standalone tea.Model;
+// its Update and View methods are called by AppModel.
+type LogViewModel struct {
+	entries   []LogEntry
+	offset    int
+	width     int
+	height    int
+	filter    string
+	filtering bool
+}
+
+// ---------------------------------------------------------------------------
+// Constructor
+// ---------------------------------------------------------------------------
+
+// NewLogView returns an empty LogViewModel.
+func NewLogView() LogViewModel {
+	return LogViewModel{}
+}
+
+// ---------------------------------------------------------------------------
+// Update / View (not tea.Model — managed by AppModel)
+// ---------------------------------------------------------------------------
+
+// Update handles messages for the log view.
+func (m LogViewModel) Update(msg tea.Msg) (LogViewModel, tea.Cmd) {
+	switch msg := msg.(type) {
+
+	case tea.KeyMsg:
+		if m.filtering {
+			return m.updateFiltering(msg)
+		}
+		return m.updateNormal(msg)
+
+	case LogEntryMsg:
+		entry := LogEntry(msg)
+		m.entries = append(m.entries, entry)
+		if len(m.entries) > 1000 {
+			m.entries = m.entries[len(m.entries)-1000:]
+		}
+		return m, nil
+
+	case tea.WindowSizeMsg:
+		m.width = msg.Width
+		m.height = msg.Height
+		return m, nil
+	}
+
+	return m, nil
+}
+
+// updateNormal handles keys when NOT in filtering mode.
+func (m LogViewModel) updateNormal(msg tea.KeyMsg) (LogViewModel, tea.Cmd) {
+	filtered := m.filteredEntries()
+	switch msg.String() {
+	case "up", "k":
+		if m.offset > 0 {
+			m.offset--
+		}
+	case "down", "j":
+		maxOffset := max(0, len(filtered)-m.viewHeight())
+		if m.offset < maxOffset {
+			m.offset++
+		}
+	case "/":
+		m.filtering = true
+		m.filter = ""
+		m.offset = 0
+	}
+	return m, nil
+}
+
+// updateFiltering handles keys when in filtering mode.
+func (m LogViewModel) updateFiltering(msg tea.KeyMsg) (LogViewModel, tea.Cmd) {
+	switch msg.Type {
+	case tea.KeyEnter:
+		m.filtering = false
+	case tea.KeyEscape:
+		m.filter = ""
+		m.filtering = false
+	case tea.KeyBackspace:
+		if len(m.filter) > 0 {
+			m.filter = m.filter[:len(m.filter)-1]
+		}
+	default:
+		if len(msg.String()) == 1 {
+			m.filter += msg.String()
+		}
+	}
+	m.offset = 0
+	return m, nil
+}
+
+// View renders the log view.
+func (m LogViewModel) View() string {
+	var b strings.Builder
+
+	// Header
+	header := titleStyle.Render(" esync ─ logs ") + dimStyle.Render(strings.Repeat("─", max(0, m.width-15)))
+	b.WriteString(header + "\n")
+
+	// Log lines
+	filtered := m.filteredEntries()
+	vh := m.viewHeight()
+	start := m.offset
+	end := min(start+vh, len(filtered))
+
+	for i := start; i < end; i++ {
+		entry := filtered[i]
+		ts := entry.Time.Format("15:04:05")
+		lvl := m.styledLevel(entry.Level)
+		b.WriteString(fmt.Sprintf(" %s %s %s\n", dimStyle.Render(ts), lvl, entry.Message))
+	}
+
+	// Pad remaining lines
+	rendered := end - start
+	for i := rendered; i < vh; i++ {
+		b.WriteString("\n")
+	}
+
+	// Help / filter
+	if m.filtering {
+		b.WriteString(helpStyle.Render(fmt.Sprintf(" filter: %s█ (enter apply esc clear)", m.filter)))
+	} else {
+		help := " ↑↓ scroll / filter l back q quit"
+		if m.filter != "" {
+			help += fmt.Sprintf(" [filter: %s]", m.filter)
+		}
+		b.WriteString(helpStyle.Render(help))
+	}
+	b.WriteString("\n")
+
+	return b.String()
+}
+
+// ---------------------------------------------------------------------------
+// Helpers
+// ---------------------------------------------------------------------------
+
+// viewHeight returns the number of log lines visible (total height minus
+// header and help bar).
+func (m LogViewModel) viewHeight() int {
+	return max(1, m.height-3)
+}
+
+// styledLevel returns the level string with appropriate color.
+func (m LogViewModel) styledLevel(level string) string {
+	switch level {
+	case "INF":
+		return statusSynced.Render("INF")
+	case "WRN":
+		return statusSyncing.Render("WRN")
+	case "ERR":
+		return statusError.Render("ERR")
+	default:
+		return dimStyle.Render(level)
+	}
+}
+
+// filteredEntries returns log entries matching the current filter
+// (case-insensitive match on Message).
+func (m LogViewModel) filteredEntries() []LogEntry {
+	if m.filter == "" {
+		return m.entries
+	}
+	lf := strings.ToLower(m.filter)
+	var out []LogEntry
+	for _, e := range m.entries {
+		if strings.Contains(strings.ToLower(e.Message), lf) {
+			out = append(out, e)
+		}
+	}
+	return out
+}
diff --git a/internal/tui/styles.go b/internal/tui/styles.go
@@ -0,0 +1,18 @@
+// Package tui provides a terminal user interface for esync built on
+// Bubbletea and Lipgloss.
+package tui
+
+import "github.com/charmbracelet/lipgloss"
+
+// ---------------------------------------------------------------------------
+// Lipgloss styles
+// ---------------------------------------------------------------------------
+
+var (
+	titleStyle    = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("12"))
+	statusSynced  = lipgloss.NewStyle().Foreground(lipgloss.Color("10"))
+	statusSyncing = lipgloss.NewStyle().Foreground(lipgloss.Color("11"))
+	statusError   = lipgloss.NewStyle().Foreground(lipgloss.Color("9"))
+	dimStyle      = lipgloss.NewStyle().Foreground(lipgloss.Color("8"))
+	helpStyle     = lipgloss.NewStyle().Foreground(lipgloss.Color("8"))
+)
diff --git a/internal/watcher/watcher.go b/internal/watcher/watcher.go
@@ -0,0 +1,222 @@
+// Package watcher monitors a directory tree for file-system changes using
+// fsnotify and debounces rapid events into a single callback.
+package watcher
+
+import (
+	"os"
+	"path/filepath"
+	"sync"
+	"time"
+
+	"github.com/fsnotify/fsnotify"
+)
+
+// ---------------------------------------------------------------------------
+// EventHandler
+// ---------------------------------------------------------------------------
+
+// EventHandler is called after the debounce window closes.
+type EventHandler func()
+
+// ---------------------------------------------------------------------------
+// Debouncer
+// ---------------------------------------------------------------------------
+
+// Debouncer batches rapid events into a single callback invocation.
+// Each call to Trigger resets the timer; the callback fires only after
+// the debounce interval elapses with no new triggers.
+type Debouncer struct {
+	interval time.Duration
+	callback func()
+	timer    *time.Timer
+	mu       sync.Mutex
+	stopped  bool
+}
+
+// NewDebouncer creates a Debouncer that will invoke callback after interval
+// of inactivity following the most recent Trigger call.
+func NewDebouncer(interval time.Duration, callback func()) *Debouncer {
+	return &Debouncer{
+		interval: interval,
+		callback: callback,
+	}
+}
+
+// Trigger resets the debounce timer. When the timer fires (after interval of
+// inactivity), the callback is invoked.
+func (d *Debouncer) Trigger() {
+	d.mu.Lock()
+	defer d.mu.Unlock()
+
+	if d.stopped {
+		return
+	}
+
+	if d.timer != nil {
+		d.timer.Stop()
+	}
+
+	d.timer = time.AfterFunc(d.interval, func() {
+		d.mu.Lock()
+		stopped := d.stopped
+		d.mu.Unlock()
+		if !stopped {
+			d.callback()
+		}
+	})
+}
+
+// Stop cancels any pending callback. After Stop returns, no further callbacks
+// will be invoked even if Trigger is called again.
+func (d *Debouncer) Stop() {
+	d.mu.Lock()
+	defer d.mu.Unlock()
+
+	d.stopped = true
+	if d.timer != nil {
+		d.timer.Stop()
+	}
+}
+
+// ---------------------------------------------------------------------------
+// Watcher
+// ---------------------------------------------------------------------------
+
+// Watcher monitors a directory tree for file-system changes using fsnotify.
+// Events are debounced so that a burst of rapid changes results in a single
+// call to the configured handler.
+type Watcher struct {
+	fsw       *fsnotify.Watcher
+	debouncer *Debouncer
+	path      string
+	ignores   []string
+	done      chan struct{}
+}
+
+// New creates a Watcher for the given directory path. debounceMs sets the
+// debounce interval in milliseconds (defaults to 500 if 0). ignores is a
+// list of filepath.Match patterns to skip. handler is called after each
+// debounced event batch.
+func New(path string, debounceMs int, ignores []string, handler EventHandler) (*Watcher, error) {
+	if debounceMs <= 0 {
+		debounceMs = 500
+	}
+
+	fsw, err := fsnotify.NewWatcher()
+	if err != nil {
+		return nil, err
+	}
+
+	w := &Watcher{
+		fsw:     fsw,
+		path:    path,
+		ignores: ignores,
+		done:    make(chan struct{}),
+	}
+
+	w.debouncer = NewDebouncer(time.Duration(debounceMs)*time.Millisecond, handler)
+
+	return w, nil
+}
+
+// Start adds the watched path recursively and launches the event loop in a
+// background goroutine.
+func (w *Watcher) Start() error {
+	if err := w.addRecursive(w.path); err != nil {
+		return err
+	}
+
+	go w.eventLoop()
+	return nil
+}
+
+// Stop shuts down the watcher: cancels the debouncer, closes fsnotify, and
+// waits for the event loop goroutine to exit.
+func (w *Watcher) Stop() {
+	w.debouncer.Stop()
+	w.fsw.Close()
+	<-w.done
+}
+
+// ---------------------------------------------------------------------------
+// Private methods
+// ---------------------------------------------------------------------------
+
+// eventLoop reads fsnotify events and errors until the watcher is closed.
+func (w *Watcher) eventLoop() {
+	defer close(w.done)
+
+	for {
+		select {
+		case event, ok := <-w.fsw.Events:
+			if !ok {
+				return
+			}
+
+			// Only react to meaningful operations
+			if !isRelevantOp(event.Op) {
+				continue
+			}
+
+			if w.shouldIgnore(event.Name) {
+				continue
+			}
+
+			// If a new directory was created, watch it recursively
+			if event.Op&fsnotify.Create != 0 {
+				if info, err := os.Stat(event.Name); err == nil && info.IsDir() {
+					_ = w.addRecursive(event.Name)
+				}
+			}
+
+			w.debouncer.Trigger()
+
+		case _, ok := <-w.fsw.Errors:
+			if !ok {
+				return
+			}
+			// Watcher errors are discarded; they do not stop the loop.
+		}
+	}
+}
+
+// isRelevantOp returns true for file-system operations we care about.
+func isRelevantOp(op fsnotify.Op) bool {
+	return op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0
+}
+
+// shouldIgnore checks the base name of path against all ignore patterns
+// using filepath.Match.
+func (w *Watcher) shouldIgnore(path string) bool {
+	base := filepath.Base(path)
+	for _, pattern := range w.ignores {
+		if matched, _ := filepath.Match(pattern, base); matched {
+			return true
+		}
+	}
+	return false
+}
+
+// addRecursive walks the directory tree rooted at path and adds every
+// directory to the fsnotify watcher. Individual files are not added
+// because fsnotify watches directories for events on their contents.
+func (w *Watcher) addRecursive(path string) error {
+	return filepath.Walk(path, func(p string, info os.FileInfo, err error) error {
+		if err != nil {
+			return nil // skip entries we cannot stat
+		}
+
+		if w.shouldIgnore(p) {
+			if info.IsDir() {
+				return filepath.SkipDir
+			}
+			return nil
+		}
+
+		if info.IsDir() {
+			return w.fsw.Add(p)
+		}
+
+		return nil
+	})
+}
diff --git a/internal/watcher/watcher_test.go b/internal/watcher/watcher_test.go
@@ -0,0 +1,112 @@
+package watcher
+
+import (
+	"sync/atomic"
+	"testing"
+	"time"
+)
+
+// ---------------------------------------------------------------------------
+// 1. TestDebouncerBatchesEvents — rapid events produce exactly one callback
+// ---------------------------------------------------------------------------
+func TestDebouncerBatchesEvents(t *testing.T) {
+	var count atomic.Int64
+
+	d := NewDebouncer(100*time.Millisecond, func() {
+		count.Add(1)
+	})
+	defer d.Stop()
+
+	// Fire 5 events rapidly, 10ms apart
+	for i := 0; i < 5; i++ {
+		d.Trigger()
+		time.Sleep(10 * time.Millisecond)
+	}
+
+	// Wait for debounce window to expire plus margin
+	time.Sleep(200 * time.Millisecond)
+
+	got := count.Load()
+	if got != 1 {
+		t.Errorf("callback fired %d times, want 1", got)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 2. TestDebouncerSeparateEvents — two events separated by more than the
+// debounce interval should fire the callback twice
+// ---------------------------------------------------------------------------
+func TestDebouncerSeparateEvents(t *testing.T) {
+	var count atomic.Int64
+
+	d := NewDebouncer(50*time.Millisecond, func() {
+		count.Add(1)
+	})
+	defer d.Stop()
+
+	// First event
+	d.Trigger()
+	// Wait for the debounce to fire
+	time.Sleep(150 * time.Millisecond)
+
+	// Second event
+	d.Trigger()
+	// Wait for the debounce to fire
+	time.Sleep(150 * time.Millisecond)
+
+	got := count.Load()
+	if got != 2 {
+		t.Errorf("callback fired %d times, want 2", got)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 3. TestDebouncerStopCancelsPending — Stop prevents a pending callback
+// ---------------------------------------------------------------------------
+func TestDebouncerStopCancelsPending(t *testing.T) {
+	var count atomic.Int64
+
+	d := NewDebouncer(100*time.Millisecond, func() {
+		count.Add(1)
+	})
+
+	d.Trigger()
+	// Stop before the debounce interval elapses
+	time.Sleep(20 * time.Millisecond)
+	d.Stop()
+
+	// Wait past the debounce interval
+	time.Sleep(200 * time.Millisecond)
+
+	got := count.Load()
+	if got != 0 {
+		t.Errorf("callback fired %d times after Stop, want 0", got)
+	}
+}
+
+// ---------------------------------------------------------------------------
+// 4. TestShouldIgnore — verify ignore pattern matching
+// ---------------------------------------------------------------------------
+func TestShouldIgnore(t *testing.T) {
+	w := &Watcher{
+		ignores: []string{".git", "*.tmp", "node_modules"},
+	}
+
+	tests := []struct {
+		path   string
+		expect bool
+	}{
+		{"/project/.git", true},
+		{"/project/foo.tmp", true},
+		{"/project/node_modules", true},
+		{"/project/main.go", false},
+		{"/project/src/app.go", false},
+	}
+
+	for _, tt := range tests {
+		got := w.shouldIgnore(tt.path)
+		if got != tt.expect {
+			t.Errorf("shouldIgnore(%q) = %v, want %v", tt.path, got, tt.expect)
+		}
+	}
+}
diff --git a/main.go b/main.go
@@ -0,0 +1,7 @@
+package main
+
+import "github.com/eloualiche/esync/cmd"
+
+func main() {
+	cmd.Execute()
+}
diff --git a/pyproject.toml b/pyproject.toml
@@ -1,20 +0,0 @@
-[project]
-name = "esync"
-version = "0.1.0"
-description = "watching and syncing folders"
-readme = "README.md"
-requires-python = ">=3.9"
-dependencies = [
-    "pydantic>=2.10.6",
-    "pytest>=8.3.4",
-    "pywatchman>=2.0.0",
-    "pyyaml>=6.0.2",
-    "rich>=13.9.4",
-    "tomli>=2.2.1",
-    "tomli-w>=1.2.0",
-    "typer>=0.15.1",
-    "watchdog>=6.0.0",
-]
-
-[project.scripts]
-esync = "esync.cli:app"
diff --git a/readme.md b/readme.md
@@ -1,45 +1,515 @@
 # esync
-A basic file sync tool based on
watchdog/watchman and rsync. +A lightweight file synchronization tool that watches your local directory for changes and automatically syncs them to a local or remote destination using rsync. -Tested for rsync version 3.4.1 and python >=3.9. -For more information on rsync and options, visit the [manual page.](https://linux.die.net/man/1/rsync) +## Installation +Install with `go install`: -## Installation +```bash +go install github.com/eloualiche/esync@latest +``` + +Or build from source: -With pip ```bash -git clone https://github.com/eloualic/esync.git +git clone https://github.com/eloualiche/esync.git cd esync -pip install -e . +go build -o esync . +``` + +## Quick Start + +```bash +# 1. Generate a config file (imports .gitignore, detects common dirs) +esync init -r user@host:/path/to/dest + +# 2. Preview what will be synced +esync check + +# 3. Start watching and syncing +esync sync +``` + +## Commands Reference + +### `esync sync` + +Watch a local directory for changes and sync them to a destination using rsync. Launches an interactive TUI by default. 
+
+```bash
+esync sync                           # use config file, launch TUI
+esync sync -c project.toml           # use a specific config file
+esync sync -l ./src -r server:/opt   # quick mode, no config file needed
+esync sync --daemon                  # run in background (no TUI)
+esync sync --dry-run                 # show what would sync, don't transfer
+esync sync --initial-sync            # force a full sync on startup
+esync sync -v                        # verbose output (daemon mode)
+```
+
+| Flag             | Short | Description                               |
+|------------------|-------|-------------------------------------------|
+| `--local`        | `-l`  | Local path to watch                       |
+| `--remote`       | `-r`  | Remote destination path                   |
+| `--daemon`       |       | Run in daemon mode (no TUI)               |
+| `--dry-run`      |       | Show what would be synced without syncing |
+| `--initial-sync` |       | Force a full sync on startup              |
+| `--verbose`      | `-v`  | Verbose output                            |
+| `--config`       | `-c`  | Config file path (global flag)            |
+
+When both `-l` and `-r` are provided, esync runs without a config file (quick mode). Otherwise it searches for a config file automatically.
+
+### `esync init`
+
+Generate an `esync.toml` configuration file in the current directory. Inspects the project for `.gitignore` patterns and common directories (`.venv`, `build`, `__pycache__`, etc.) to auto-populate ignore rules.
+
+```bash
+esync init                      # interactive prompt for remote
+esync init -r user@host:/path   # pre-fill the remote destination
+esync init -c ~/.config/esync/config.toml -r server:/data   # custom path
 ```
 
-## Usage
+| Flag       | Short | Description                                |
+|------------|-------|--------------------------------------------|
+| `--remote` | `-r`  | Pre-fill remote destination                |
+| `--config` | `-c`  | Output file path (default: `./esync.toml`) |
 
-### Configuration
+### `esync check`
+
+Validate your configuration and preview which files would be included or excluded by the ignore patterns.
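The preview is driven by the same glob semantics the watcher code earlier in this commit uses: each pattern is matched against a path's base name with Go's `filepath.Match`. A small standalone illustration of those semantics (the `ignored` helper here is hypothetical, written only for this example):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// ignored reports whether any glob pattern matches the path's base name,
// mirroring the base-name matching in internal/watcher shown above.
func ignored(path string, patterns []string) bool {
	base := filepath.Base(path)
	for _, p := range patterns {
		if ok, _ := filepath.Match(p, base); ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{".git", "*.tmp", "node_modules"}
	fmt.Println(ignored("/project/foo.tmp", patterns))    // true: "*.tmp" matches "foo.tmp"
	fmt.Println(ignored("/project/src/app.go", patterns)) // false: no pattern matches "app.go"
}
```

Because matching is against the base name only, a pattern like `*.tmp` ignores matching files anywhere in the tree.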
-To create a configuration file, run the following command:
 ```bash
-esync init --help
+esync check                   # auto-detect config
+esync check -c project.toml   # check a specific config file
 ```
 
-### Basic command
+| Flag       | Short | Description      |
+|------------|-------|------------------|
+| `--config` | `-c`  | Config file path |
+
+### `esync edit`
+
+Open the config file in your `$EDITOR` (defaults to `vi`). After saving, the config is validated and a file preview is shown. If validation fails, you can re-edit or cancel.
 
-Local sync
 ```bash
-esync sync -l test-sync/source -r test-sync/target
+esync edit                    # auto-detect config
+esync edit -c project.toml    # edit a specific config file
 ```
 
-Remote sync
+| Flag       | Short | Description      |
+|------------|-------|------------------|
+| `--config` | `-c`  | Config file path |
+
+### `esync status`
+
+Check if an esync daemon is currently running. Reads the PID file from the system temp directory.
+
 ```bash
-esync sync -l test-sync/source -r user@remote:/path/to/target
-esync sync -l test-sync/source -r remote:/path/to/target # or based on ssh config
+esync status
+# esync daemon running (PID 12345)
+# — or —
+# No esync daemon running.
+```
+
+## Configuration
+
+esync uses TOML configuration files. The config file is searched in this order:
+
+1. Path given via `-c` / `--config` flag
+2. `./esync.toml` (current directory)
+3. `~/.config/esync/config.toml`
+4. `/etc/esync/config.toml`
+
+### Full Annotated Example
+
+This shows every available field with explanatory comments:
+
+```toml
+# =============================================================================
+# esync configuration file
+# =============================================================================
+
+[sync]
+# Local directory to watch for changes (required)
+local = "/home/user/projects/myapp"
+
+# Remote destination — can be a local path or an scp-style remote (required)
+# Examples:
+#   "/backup/myapp"                — local path
+#   "server:/opt/myapp"            — remote using SSH config alias
+#   "user@192.168.1.50:/opt/myapp" — remote with explicit user
+remote = "deploy@prod.example.com:/var/www/myapp"
+
+# Polling interval in seconds (default: 1)
+# This is used internally; the watcher reacts to filesystem events,
+# so you rarely need to change this.
+interval = 1
+
+# --- SSH Configuration (optional) ---
+# Use this section for fine-grained SSH control.
+# If omitted, esync infers SSH from the remote string (e.g. user@host:/path).
+[sync.ssh]
+host = "prod.example.com"
+user = "deploy"
+port = 22
+identity_file = "~/.ssh/id_ed25519"
+interactive_auth = false   # set to true for 2FA / keyboard-interactive auth
+
+# =============================================================================
+[settings]
+
+# Debounce interval in milliseconds (default: 500)
+# After a file change, esync waits this long for more changes before syncing.
+# Lower = more responsive, higher = fewer rsync invocations during rapid edits.
+watcher_debounce = 500
+
+# Run a full sync when esync starts (default: false)
+initial_sync = false
+
+# Patterns to ignore — applied to both the watcher and rsync --exclude flags.
+# Supports glob patterns. Matched against file/directory base names.
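+# For example, "*.pyc" matches any entry whose base name ends in .pyc,
+# and ".git" matches an entry named exactly .git at any depth.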
+ignore = [
+    ".git",
+    "node_modules",
+    ".DS_Store",
+    "__pycache__",
+    "*.pyc",
+    ".venv",
+    "build",
+    "dist",
+    ".tox",
+    ".mypy_cache",
+]
+
+# --- Rsync Settings ---
+[settings.rsync]
+archive = true               # rsync --archive (preserves symlinks, permissions, timestamps)
+compress = true              # rsync --compress (compress data during transfer)
+backup = false               # rsync --backup (make backups of replaced files)
+backup_dir = ".rsync_backup" # directory for backup files when backup = true
+progress = true              # rsync --progress (show transfer progress)
+
+# Extra arguments passed directly to rsync.
+# Useful for flags esync doesn't expose directly.
+extra_args = []
+
+# Additional rsync-specific ignore patterns (merged with settings.ignore).
+ignore = []
+
+# --- Logging ---
+[settings.log]
+# Log file path. If omitted, no log file is written.
+# Logs are only written in daemon mode.
+# file = "/var/log/esync.log"
+
+# Log format: "text" or "json" (default: "text")
+format = "text"
+```
+
+### Minimal Config
+
+The smallest usable config file:
+
+```toml
+[sync]
+local = "."
+remote = "user@host:/path/to/dest"
+```
+
+Everything else uses sensible defaults: archive mode, compression, 500ms debounce, and standard ignore patterns (`.git`, `node_modules`, `.DS_Store`).
+
+### SSH Config Example
+
+For remote servers with a specific SSH key and non-standard port:
+
+```toml
+[sync]
+local = "."
+remote = "/var/www/myapp"
+
+[sync.ssh]
+host = "myserver.example.com"
+user = "deploy"
+port = 2222
+identity_file = "~/.ssh/deploy_key"
+```
+
+When `[sync.ssh]` is present, esync constructs the full destination as `deploy@myserver.example.com:/var/www/myapp` and passes SSH options (port, identity file, ControlMaster) to rsync automatically.
+
+### 2FA / Keyboard-Interactive Authentication
+
+If your server requires two-factor authentication:
+
+```toml
+[sync]
+local = "."
+remote = "/home/user/project"
+
+[sync.ssh]
+host = "secure-server.example.com"
+user = "admin"
+identity_file = "~/.ssh/id_ed25519"
+interactive_auth = true
+```
+
+### Custom Rsync Flags
+
+Pass extra arguments directly to rsync using `extra_args`:
+
+```toml
+[sync]
+local = "./src"
+remote = "server:/opt/app/src"
+
+[settings.rsync]
+archive = true
+compress = true
+extra_args = [
+    "--delete",                           # delete files on remote that don't exist locally
+    "--chmod=Du=rwx,Dgo=rx,Fu=rw,Fgo=r",  # set permissions on remote
+    "--exclude-from=.rsyncignore",        # additional exclude file
+    "--bwlimit=5000",                     # bandwidth limit in KBytes/sec
+]
+```
+
+### Separate Watcher and Rsync Ignore Patterns
+
+The top-level `settings.ignore` patterns are used by both the file watcher and rsync. If you need rsync-specific excludes (patterns the watcher should still see), use `settings.rsync.ignore`:
+
+```toml
+[settings]
+# These patterns are used by BOTH the watcher and rsync
+ignore = [".git", "node_modules", ".DS_Store"]
+
+[settings.rsync]
+# These patterns are ONLY passed to rsync as --exclude flags
+ignore = ["*.log", "*.tmp", "cache/"]
+```
+
+### Logging Config
+
+```toml
+[settings.log]
+file = "/var/log/esync.log"
+format = "json"
+```
+
+Text format output:
+
+```
+15:04:05 INF started local=/home/user/project pid=12345 remote=server:/opt/app
+15:04:07 INF sync_complete bytes=2048 duration=150ms files=3
+15:04:12 ERR sync_failed error=rsync error: ...
+```
+
+JSON format output:
+
+```json
+{"time":"15:04:05","level":"info","event":"started","local":"/home/user/project","remote":"server:/opt/app","pid":12345}
+{"time":"15:04:07","level":"info","event":"sync_complete","files":3,"bytes":2048,"duration":"150ms"}
+```
+
+## TUI Keyboard Shortcuts
+
+The interactive TUI (default mode) provides two views: Dashboard and Logs.
+
+### Dashboard View
+
+| Key      | Action                        |
+|----------|-------------------------------|
+| `q`      | Quit                          |
+| `Ctrl+C` | Quit                          |
+| `p`      | Pause / resume watching       |
+| `l`      | Switch to log view            |
+| `/`      | Enter filter mode             |
+| `Enter`  | Apply filter (in filter mode) |
+| `Esc`    | Clear filter (in filter mode) |
+
+### Log View
+
+| Key          | Action                        |
+|--------------|-------------------------------|
+| `q`          | Quit                          |
+| `Ctrl+C`     | Quit                          |
+| `l`          | Switch back to dashboard      |
+| `j` / `Down` | Scroll down                   |
+| `k` / `Up`   | Scroll up                     |
+| `/`          | Enter filter mode             |
+| `Enter`      | Apply filter (in filter mode) |
+| `Esc`        | Clear filter (in filter mode) |
+
+## Daemon Mode
+
+Run esync in the background without the TUI:
+
+```bash
+# Start daemon
+esync sync --daemon
+
+# Start daemon with verbose output and JSON logging
+esync sync --daemon -v -c project.toml
+
+# Check if the daemon is running
+esync status
+
+# Stop the daemon
+kill $(cat /tmp/esync.pid)
+```
+
+The daemon writes its PID to `/tmp/esync.pid` so you can check status and stop it later. On receiving `SIGINT` or `SIGTERM` the daemon shuts down gracefully.
+
+When a log file is configured, the daemon writes structured entries for every sync event:
+
+```bash
+# Monitor logs in real-time
+tail -f /var/log/esync.log
+```
+
+## SSH Setup
+
+esync uses rsync's SSH transport for remote syncing. There are two ways to configure SSH.
+
+### Inline (via remote string)
+
+If your `~/.ssh/config` is already set up, just use the host alias:
+
+```toml
+[sync]
+local = "."
+remote = "myserver:/opt/app"
+```
+
+This works when `myserver` is defined in `~/.ssh/config`:
+
+```
+Host myserver
+    HostName 192.168.1.50
+    User deploy
+    IdentityFile ~/.ssh/id_ed25519
+```
+
+### Explicit SSH Section
+
+For full control without relying on `~/.ssh/config`:
+
+```toml
+[sync.ssh]
+host = "192.168.1.50"
+user = "deploy"
+port = 22
+identity_file = "~/.ssh/id_ed25519"
+```
+
+When the `[sync.ssh]` section is present, esync automatically enables SSH ControlMaster with these options:
+
+- `ControlMaster=auto` -- reuse existing SSH connections
+- `ControlPath=/tmp/esync-ssh-%r@%h:%p` -- socket path for multiplexing
+- `ControlPersist=600` -- keep the connection alive for 10 minutes
+
+This avoids re-authenticating on every sync and significantly speeds up repeated transfers.
+
+### 2FA Authentication
+
+Set `interactive_auth = true` in the SSH config to enable keyboard-interactive authentication for servers that require a second factor:
+
+```toml
+[sync.ssh]
+host = "secure.example.com"
+user = "admin"
+identity_file = "~/.ssh/id_ed25519"
+interactive_auth = true
+```
+
+## Examples
+
+### Local directory sync
+
+Sync a source directory to a local backup:
+
+```bash
+esync sync -l ./src -r /backup/src
+```
+
+### Remote sync with SSH
+
+Sync to a remote server using a config file:
+
+```toml
+# esync.toml
+[sync]
+local = "."
+remote = "deploy@prod.example.com:/var/www/mysite"
+
+[settings]
+ignore = [".git", "node_modules", ".DS_Store", ".env"]
+```
+
+```bash
+esync sync
+```
+
+### Quick sync (no config file)
+
+Sync without a config file by passing both paths on the command line:
+
+```bash
+esync sync -l ./project -r user@server:/opt/project
+```
+
+This uses sensible defaults: archive mode, compression, 500ms debounce, and ignores `.git`, `node_modules`, `.DS_Store`.
+
+### Daemon mode with JSON logs
+
+Run in the background with structured logging:
+
+```toml
+# esync.toml
+[sync]
+local = "/home/user/code"
+remote = "server:/opt/code"
+
+[settings]
+initial_sync = true
+
+[settings.log]
+file = "/var/log/esync.log"
+format = "json"
+```
+
+```bash
+esync sync --daemon -v
+# esync daemon started (PID 54321)
+# Watching: /home/user/code -> server:/opt/code
+```
+
+### Custom rsync flags (delete extraneous files)
+
+Keep the remote directory in exact sync by deleting files that no longer exist locally:
+
+```toml
+# esync.toml
+[sync]
+local = "./dist"
+remote = "cdn-server:/var/www/static"
+
+[settings.rsync]
+extra_args = ["--delete", "--chmod=Fu=rw,Fgo=r,Du=rwx,Dgo=rx"]
+```
+
+```bash
+esync sync --initial-sync
+```
+
+### Dry run to preview changes
+
+See what rsync would do without actually transferring anything:
+
+```bash
+esync sync --dry-run
+```
 
-## Future
+## System Requirements
 
-- Two step authentication
-- Statistics
-- General option for rsync
+- **Go** 1.22+ (for building from source)
+- **rsync** 3.x
+- **macOS** or **Linux** (uses fsnotify for filesystem events)
diff --git a/uv.lock b/uv.lock
@@ -1,452 +0,0 @@
-version = 1
-requires-python = ">=3.9"
-
-[[package]]
-name = "annotated-types"
-version = "0.7.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081 }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
-]
-
-[[package]]
-name = "click"
-version = "8.1.8"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name =
"colorama", marker = "sys_platform == 'win32'" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2", size = 98188 }, -] - -[[package]] -name = "colorama" -version = "0.4.6" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 }, -] - -[[package]] -name = "esync" -version = "0.1.0" -source = { virtual = "." 
} -dependencies = [ - { name = "pydantic" }, - { name = "pytest" }, - { name = "pywatchman" }, - { name = "pyyaml" }, - { name = "rich" }, - { name = "tomli" }, - { name = "tomli-w" }, - { name = "typer" }, - { name = "watchdog" }, -] - -[package.metadata] -requires-dist = [ - { name = "pydantic", specifier = ">=2.10.6" }, - { name = "pytest", specifier = ">=8.3.4" }, - { name = "pywatchman", specifier = ">=2.0.0" }, - { name = "pyyaml", specifier = ">=6.0.2" }, - { name = "rich", specifier = ">=13.9.4" }, - { name = "tomli", specifier = ">=2.2.1" }, - { name = "tomli-w", specifier = ">=1.2.0" }, - { name = "typer", specifier = ">=0.15.1" }, - { name = "watchdog", specifier = ">=6.0.0" }, -] - -[[package]] -name = "exceptiongroup" -version = "1.2.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/09/35/2495c4ac46b980e4ca1f6ad6db102322ef3ad2410b79fdde159a4b0f3b92/exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc", size = 28883 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/02/cc/b7e31358aac6ed1ef2bb790a9746ac2c69bcb3c8588b41616914eb106eaf/exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b", size = 16453 }, -] - -[[package]] -name = "iniconfig" -version = "2.0.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d7/4b/cbd8e699e64a6f16ca3a8220661b5f83792b3017d0f79807cb8708d33913/iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3", size = 4646 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ef/a6/62565a6e1cf69e10f5727360368e451d4b7f58beeac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374", size = 5892 }, -] - -[[package]] -name = "markdown-it-py" -version 
= "3.0.0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "mdurl" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/38/71/3b932df36c1a044d397a1f92d1cf91ee0a503d91e470cbd670aa66b07ed0/markdown-it-py-3.0.0.tar.gz", hash = "sha256:e3f60a94fa066dc52ec76661e37c851cb232d92f9886b15cb560aaada2df8feb", size = 74596 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/42/d7/1ec15b46af6af88f19b8e5ffea08fa375d433c998b8a7639e76935c14f1f/markdown_it_py-3.0.0-py3-none-any.whl", hash = "sha256:355216845c60bd96232cd8d8c40e8f9765cc86f46880e43a8fd22dc1a1a8cab1", size = 87528 }, -] - -[[package]] -name = "mdurl" -version = "0.1.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d6/54/cfe61301667036ec958cb99bd3efefba235e65cdeb9c84d24a8293ba1d90/mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba", size = 8729 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979 }, -] - -[[package]] -name = "packaging" -version = "24.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d0/63/68dbb6eb2de9cb10ee4c9c14a0148804425e13c4fb20d61cce69f53106da/packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f", size = 163950 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759", size = 65451 }, -] - -[[package]] -name = "pluggy" -version = "1.5.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = 
"https://files.pythonhosted.org/packages/96/2d/02d4312c973c6050a18b314a5ad0b3210edb65a906f868e31c111dede4a6/pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1", size = 67955 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669", size = 20556 }, -] - -[[package]] -name = "pydantic" -version = "2.10.6" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "annotated-types" }, - { name = "pydantic-core" }, - { name = "typing-extensions" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/b7/ae/d5220c5c52b158b1de7ca89fc5edb72f304a70a4c540c84c8844bf4008de/pydantic-2.10.6.tar.gz", hash = "sha256:ca5daa827cce33de7a42be142548b0096bf05a7e7b365aebfa5f8eeec7128236", size = 761681 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/f4/3c/8cc1cc84deffa6e25d2d0c688ebb80635dfdbf1dbea3e30c541c8cf4d860/pydantic-2.10.6-py3-none-any.whl", hash = "sha256:427d664bf0b8a2b34ff5dd0f5a18df00591adcee7198fbd71981054cef37b584", size = 431696 }, -] - -[[package]] -name = "pydantic-core" -version = "2.27.2" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "typing-extensions" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/fc/01/f3e5ac5e7c25833db5eb555f7b7ab24cd6f8c322d3a3ad2d67a952dc0abc/pydantic_core-2.27.2.tar.gz", hash = "sha256:eb026e5a4c1fee05726072337ff51d1efb6f59090b7da90d30ea58625b1ffb39", size = 413443 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/3a/bc/fed5f74b5d802cf9a03e83f60f18864e90e3aed7223adaca5ffb7a8d8d64/pydantic_core-2.27.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2d367ca20b2f14095a8f4fa1210f5a7b78b8a20009ecced6b12818f455b1e9fa", size = 1895938 }, - { url = 
"https://files.pythonhosted.org/packages/71/2a/185aff24ce844e39abb8dd680f4e959f0006944f4a8a0ea372d9f9ae2e53/pydantic_core-2.27.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:491a2b73db93fab69731eaee494f320faa4e093dbed776be1a829c2eb222c34c", size = 1815684 }, - { url = "https://files.pythonhosted.org/packages/c3/43/fafabd3d94d159d4f1ed62e383e264f146a17dd4d48453319fd782e7979e/pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7969e133a6f183be60e9f6f56bfae753585680f3b7307a8e555a948d443cc05a", size = 1829169 }, - { url = "https://files.pythonhosted.org/packages/a2/d1/f2dfe1a2a637ce6800b799aa086d079998959f6f1215eb4497966efd2274/pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3de9961f2a346257caf0aa508a4da705467f53778e9ef6fe744c038119737ef5", size = 1867227 }, - { url = "https://files.pythonhosted.org/packages/7d/39/e06fcbcc1c785daa3160ccf6c1c38fea31f5754b756e34b65f74e99780b5/pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e2bb4d3e5873c37bb3dd58714d4cd0b0e6238cebc4177ac8fe878f8b3aa8e74c", size = 2037695 }, - { url = "https://files.pythonhosted.org/packages/7a/67/61291ee98e07f0650eb756d44998214231f50751ba7e13f4f325d95249ab/pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:280d219beebb0752699480fe8f1dc61ab6615c2046d76b7ab7ee38858de0a4e7", size = 2741662 }, - { url = "https://files.pythonhosted.org/packages/32/90/3b15e31b88ca39e9e626630b4c4a1f5a0dfd09076366f4219429e6786076/pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47956ae78b6422cbd46f772f1746799cbb862de838fd8d1fbd34a82e05b0983a", size = 1993370 }, - { url = "https://files.pythonhosted.org/packages/ff/83/c06d333ee3a67e2e13e07794995c1535565132940715931c1c43bfc85b11/pydantic_core-2.27.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = 
"sha256:14d4a5c49d2f009d62a2a7140d3064f686d17a5d1a268bc641954ba181880236", size = 1996813 }, - { url = "https://files.pythonhosted.org/packages/7c/f7/89be1c8deb6e22618a74f0ca0d933fdcb8baa254753b26b25ad3acff8f74/pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:337b443af21d488716f8d0b6164de833e788aa6bd7e3a39c005febc1284f4962", size = 2005287 }, - { url = "https://files.pythonhosted.org/packages/b7/7d/8eb3e23206c00ef7feee17b83a4ffa0a623eb1a9d382e56e4aa46fd15ff2/pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:03d0f86ea3184a12f41a2d23f7ccb79cdb5a18e06993f8a45baa8dfec746f0e9", size = 2128414 }, - { url = "https://files.pythonhosted.org/packages/4e/99/fe80f3ff8dd71a3ea15763878d464476e6cb0a2db95ff1c5c554133b6b83/pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7041c36f5680c6e0f08d922aed302e98b3745d97fe1589db0a3eebf6624523af", size = 2155301 }, - { url = "https://files.pythonhosted.org/packages/2b/a3/e50460b9a5789ca1451b70d4f52546fa9e2b420ba3bfa6100105c0559238/pydantic_core-2.27.2-cp310-cp310-win32.whl", hash = "sha256:50a68f3e3819077be2c98110c1f9dcb3817e93f267ba80a2c05bb4f8799e2ff4", size = 1816685 }, - { url = "https://files.pythonhosted.org/packages/57/4c/a8838731cb0f2c2a39d3535376466de6049034d7b239c0202a64aaa05533/pydantic_core-2.27.2-cp310-cp310-win_amd64.whl", hash = "sha256:e0fd26b16394ead34a424eecf8a31a1f5137094cabe84a1bcb10fa6ba39d3d31", size = 1982876 }, - { url = "https://files.pythonhosted.org/packages/c2/89/f3450af9d09d44eea1f2c369f49e8f181d742f28220f88cc4dfaae91ea6e/pydantic_core-2.27.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:8e10c99ef58cfdf2a66fc15d66b16c4a04f62bca39db589ae8cba08bc55331bc", size = 1893421 }, - { url = "https://files.pythonhosted.org/packages/9e/e3/71fe85af2021f3f386da42d291412e5baf6ce7716bd7101ea49c810eda90/pydantic_core-2.27.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:26f32e0adf166a84d0cb63be85c562ca8a6fa8de28e5f0d92250c6b7e9e2aff7", size = 
1814998 }, - { url = "https://files.pythonhosted.org/packages/a6/3c/724039e0d848fd69dbf5806894e26479577316c6f0f112bacaf67aa889ac/pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c19d1ea0673cd13cc2f872f6c9ab42acc4e4f492a7ca9d3795ce2b112dd7e15", size = 1826167 }, - { url = "https://files.pythonhosted.org/packages/2b/5b/1b29e8c1fb5f3199a9a57c1452004ff39f494bbe9bdbe9a81e18172e40d3/pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5e68c4446fe0810e959cdff46ab0a41ce2f2c86d227d96dc3847af0ba7def306", size = 1865071 }, - { url = "https://files.pythonhosted.org/packages/89/6c/3985203863d76bb7d7266e36970d7e3b6385148c18a68cc8915fd8c84d57/pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d9640b0059ff4f14d1f37321b94061c6db164fbe49b334b31643e0528d100d99", size = 2036244 }, - { url = "https://files.pythonhosted.org/packages/0e/41/f15316858a246b5d723f7d7f599f79e37493b2e84bfc789e58d88c209f8a/pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:40d02e7d45c9f8af700f3452f329ead92da4c5f4317ca9b896de7ce7199ea459", size = 2737470 }, - { url = "https://files.pythonhosted.org/packages/a8/7c/b860618c25678bbd6d1d99dbdfdf0510ccb50790099b963ff78a124b754f/pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c1fd185014191700554795c99b347d64f2bb637966c4cfc16998a0ca700d048", size = 1992291 }, - { url = "https://files.pythonhosted.org/packages/bf/73/42c3742a391eccbeab39f15213ecda3104ae8682ba3c0c28069fbcb8c10d/pydantic_core-2.27.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d81d2068e1c1228a565af076598f9e7451712700b673de8f502f0334f281387d", size = 1994613 }, - { url = "https://files.pythonhosted.org/packages/94/7a/941e89096d1175d56f59340f3a8ebaf20762fef222c298ea96d36a6328c5/pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = 
"sha256:1a4207639fb02ec2dbb76227d7c751a20b1a6b4bc52850568e52260cae64ca3b", size = 2002355 },
- { url = "https://files.pythonhosted.org/packages/6e/95/2359937a73d49e336a5a19848713555605d4d8d6940c3ec6c6c0ca4dcf25/pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:3de3ce3c9ddc8bbd88f6e0e304dea0e66d843ec9de1b0042b0911c1663ffd474", size = 2126661 },
- { url = "https://files.pythonhosted.org/packages/2b/4c/ca02b7bdb6012a1adef21a50625b14f43ed4d11f1fc237f9d7490aa5078c/pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:30c5f68ded0c36466acede341551106821043e9afaad516adfb6e8fa80a4e6a6", size = 2153261 },
- { url = "https://files.pythonhosted.org/packages/72/9d/a241db83f973049a1092a079272ffe2e3e82e98561ef6214ab53fe53b1c7/pydantic_core-2.27.2-cp311-cp311-win32.whl", hash = "sha256:c70c26d2c99f78b125a3459f8afe1aed4d9687c24fd677c6a4436bc042e50d6c", size = 1812361 },
- { url = "https://files.pythonhosted.org/packages/e8/ef/013f07248041b74abd48a385e2110aa3a9bbfef0fbd97d4e6d07d2f5b89a/pydantic_core-2.27.2-cp311-cp311-win_amd64.whl", hash = "sha256:08e125dbdc505fa69ca7d9c499639ab6407cfa909214d500897d02afb816e7cc", size = 1982484 },
- { url = "https://files.pythonhosted.org/packages/10/1c/16b3a3e3398fd29dca77cea0a1d998d6bde3902fa2706985191e2313cc76/pydantic_core-2.27.2-cp311-cp311-win_arm64.whl", hash = "sha256:26f0d68d4b235a2bae0c3fc585c585b4ecc51382db0e3ba402a22cbc440915e4", size = 1867102 },
- { url = "https://files.pythonhosted.org/packages/d6/74/51c8a5482ca447871c93e142d9d4a92ead74de6c8dc5e66733e22c9bba89/pydantic_core-2.27.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9e0c8cfefa0ef83b4da9588448b6d8d2a2bf1a53c3f1ae5fca39eb3061e2f0b0", size = 1893127 },
- { url = "https://files.pythonhosted.org/packages/d3/f3/c97e80721735868313c58b89d2de85fa80fe8dfeeed84dc51598b92a135e/pydantic_core-2.27.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:83097677b8e3bd7eaa6775720ec8e0405f1575015a463285a92bfdfe254529ef", size = 1811340 },
- { url = "https://files.pythonhosted.org/packages/9e/91/840ec1375e686dbae1bd80a9e46c26a1e0083e1186abc610efa3d9a36180/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:172fce187655fece0c90d90a678424b013f8fbb0ca8b036ac266749c09438cb7", size = 1822900 },
- { url = "https://files.pythonhosted.org/packages/f6/31/4240bc96025035500c18adc149aa6ffdf1a0062a4b525c932065ceb4d868/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:519f29f5213271eeeeb3093f662ba2fd512b91c5f188f3bb7b27bc5973816934", size = 1869177 },
- { url = "https://files.pythonhosted.org/packages/fa/20/02fbaadb7808be578317015c462655c317a77a7c8f0ef274bc016a784c54/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:05e3a55d124407fffba0dd6b0c0cd056d10e983ceb4e5dbd10dda135c31071d6", size = 2038046 },
- { url = "https://files.pythonhosted.org/packages/06/86/7f306b904e6c9eccf0668248b3f272090e49c275bc488a7b88b0823444a4/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9c3ed807c7b91de05e63930188f19e921d1fe90de6b4f5cd43ee7fcc3525cb8c", size = 2685386 },
- { url = "https://files.pythonhosted.org/packages/8d/f0/49129b27c43396581a635d8710dae54a791b17dfc50c70164866bbf865e3/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6fb4aadc0b9a0c063206846d603b92030eb6f03069151a625667f982887153e2", size = 1997060 },
- { url = "https://files.pythonhosted.org/packages/0d/0f/943b4af7cd416c477fd40b187036c4f89b416a33d3cc0ab7b82708a667aa/pydantic_core-2.27.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:28ccb213807e037460326424ceb8b5245acb88f32f3d2777427476e1b32c48c4", size = 2004870 },
- { url = "https://files.pythonhosted.org/packages/35/40/aea70b5b1a63911c53a4c8117c0a828d6790483f858041f47bab0b779f44/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:de3cd1899e2c279b140adde9357c4495ed9d47131b4a4eaff9052f23398076b3", size = 1999822 },
- { url = "https://files.pythonhosted.org/packages/f2/b3/807b94fd337d58effc5498fd1a7a4d9d59af4133e83e32ae39a96fddec9d/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:220f892729375e2d736b97d0e51466252ad84c51857d4d15f5e9692f9ef12be4", size = 2130364 },
- { url = "https://files.pythonhosted.org/packages/fc/df/791c827cd4ee6efd59248dca9369fb35e80a9484462c33c6649a8d02b565/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a0fcd29cd6b4e74fe8ddd2c90330fd8edf2e30cb52acda47f06dd615ae72da57", size = 2158303 },
- { url = "https://files.pythonhosted.org/packages/9b/67/4e197c300976af185b7cef4c02203e175fb127e414125916bf1128b639a9/pydantic_core-2.27.2-cp312-cp312-win32.whl", hash = "sha256:1e2cb691ed9834cd6a8be61228471d0a503731abfb42f82458ff27be7b2186fc", size = 1834064 },
- { url = "https://files.pythonhosted.org/packages/1f/ea/cd7209a889163b8dcca139fe32b9687dd05249161a3edda62860430457a5/pydantic_core-2.27.2-cp312-cp312-win_amd64.whl", hash = "sha256:cc3f1a99a4f4f9dd1de4fe0312c114e740b5ddead65bb4102884b384c15d8bc9", size = 1989046 },
- { url = "https://files.pythonhosted.org/packages/bc/49/c54baab2f4658c26ac633d798dab66b4c3a9bbf47cff5284e9c182f4137a/pydantic_core-2.27.2-cp312-cp312-win_arm64.whl", hash = "sha256:3911ac9284cd8a1792d3cb26a2da18f3ca26c6908cc434a18f730dc0db7bfa3b", size = 1885092 },
- { url = "https://files.pythonhosted.org/packages/41/b1/9bc383f48f8002f99104e3acff6cba1231b29ef76cfa45d1506a5cad1f84/pydantic_core-2.27.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7d14bd329640e63852364c306f4d23eb744e0f8193148d4044dd3dacdaacbd8b", size = 1892709 },
- { url = "https://files.pythonhosted.org/packages/10/6c/e62b8657b834f3eb2961b49ec8e301eb99946245e70bf42c8817350cbefc/pydantic_core-2.27.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:82f91663004eb8ed30ff478d77c4d1179b3563df6cdb15c0817cd1cdaf34d154", size = 1811273 },
- { url = "https://files.pythonhosted.org/packages/ba/15/52cfe49c8c986e081b863b102d6b859d9defc63446b642ccbbb3742bf371/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71b24c7d61131bb83df10cc7e687433609963a944ccf45190cfc21e0887b08c9", size = 1823027 },
- { url = "https://files.pythonhosted.org/packages/b1/1c/b6f402cfc18ec0024120602bdbcebc7bdd5b856528c013bd4d13865ca473/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fa8e459d4954f608fa26116118bb67f56b93b209c39b008277ace29937453dc9", size = 1868888 },
- { url = "https://files.pythonhosted.org/packages/bd/7b/8cb75b66ac37bc2975a3b7de99f3c6f355fcc4d89820b61dffa8f1e81677/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce8918cbebc8da707ba805b7fd0b382816858728ae7fe19a942080c24e5b7cd1", size = 2037738 },
- { url = "https://files.pythonhosted.org/packages/c8/f1/786d8fe78970a06f61df22cba58e365ce304bf9b9f46cc71c8c424e0c334/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eda3f5c2a021bbc5d976107bb302e0131351c2ba54343f8a496dc8783d3d3a6a", size = 2685138 },
- { url = "https://files.pythonhosted.org/packages/a6/74/d12b2cd841d8724dc8ffb13fc5cef86566a53ed358103150209ecd5d1999/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8086fa684c4775c27f03f062cbb9eaa6e17f064307e86b21b9e0abc9c0f02e", size = 1997025 },
- { url = "https://files.pythonhosted.org/packages/a0/6e/940bcd631bc4d9a06c9539b51f070b66e8f370ed0933f392db6ff350d873/pydantic_core-2.27.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8d9b3388db186ba0c099a6d20f0604a44eabdeef1777ddd94786cdae158729e4", size = 2004633 },
- { url = "https://files.pythonhosted.org/packages/50/cc/a46b34f1708d82498c227d5d80ce615b2dd502ddcfd8376fc14a36655af1/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7a66efda2387de898c8f38c0cf7f14fca0b51a8ef0b24bfea5849f1b3c95af27", size = 1999404 },
- { url = "https://files.pythonhosted.org/packages/ca/2d/c365cfa930ed23bc58c41463bae347d1005537dc8db79e998af8ba28d35e/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:18a101c168e4e092ab40dbc2503bdc0f62010e95d292b27827871dc85450d7ee", size = 2130130 },
- { url = "https://files.pythonhosted.org/packages/f4/d7/eb64d015c350b7cdb371145b54d96c919d4db516817f31cd1c650cae3b21/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ba5dd002f88b78a4215ed2f8ddbdf85e8513382820ba15ad5ad8955ce0ca19a1", size = 2157946 },
- { url = "https://files.pythonhosted.org/packages/a4/99/bddde3ddde76c03b65dfd5a66ab436c4e58ffc42927d4ff1198ffbf96f5f/pydantic_core-2.27.2-cp313-cp313-win32.whl", hash = "sha256:1ebaf1d0481914d004a573394f4be3a7616334be70261007e47c2a6fe7e50130", size = 1834387 },
- { url = "https://files.pythonhosted.org/packages/71/47/82b5e846e01b26ac6f1893d3c5f9f3a2eb6ba79be26eef0b759b4fe72946/pydantic_core-2.27.2-cp313-cp313-win_amd64.whl", hash = "sha256:953101387ecf2f5652883208769a79e48db18c6df442568a0b5ccd8c2723abee", size = 1990453 },
- { url = "https://files.pythonhosted.org/packages/51/b2/b2b50d5ecf21acf870190ae5d093602d95f66c9c31f9d5de6062eb329ad1/pydantic_core-2.27.2-cp313-cp313-win_arm64.whl", hash = "sha256:ac4dbfd1691affb8f48c2c13241a2e3b60ff23247cbcf981759c768b6633cf8b", size = 1885186 },
- { url = "https://files.pythonhosted.org/packages/27/97/3aef1ddb65c5ccd6eda9050036c956ff6ecbfe66cb7eb40f280f121a5bb0/pydantic_core-2.27.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c10eb4f1659290b523af58fa7cffb452a61ad6ae5613404519aee4bfbf1df993", size = 1896475 },
- { url = "https://files.pythonhosted.org/packages/ad/d3/5668da70e373c9904ed2f372cb52c0b996426f302e0dee2e65634c92007d/pydantic_core-2.27.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef592d4bad47296fb11f96cd7dc898b92e795032b4894dfb4076cfccd43a9308", size = 1772279 },
- { url = "https://files.pythonhosted.org/packages/8a/9e/e44b8cb0edf04a2f0a1f6425a65ee089c1d6f9c4c2dcab0209127b6fdfc2/pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c61709a844acc6bf0b7dce7daae75195a10aac96a596ea1b776996414791ede4", size = 1829112 },
- { url = "https://files.pythonhosted.org/packages/1c/90/1160d7ac700102effe11616e8119e268770f2a2aa5afb935f3ee6832987d/pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:42c5f762659e47fdb7b16956c71598292f60a03aa92f8b6351504359dbdba6cf", size = 1866780 },
- { url = "https://files.pythonhosted.org/packages/ee/33/13983426df09a36d22c15980008f8d9c77674fc319351813b5a2739b70f3/pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4c9775e339e42e79ec99c441d9730fccf07414af63eac2f0e48e08fd38a64d76", size = 2037943 },
- { url = "https://files.pythonhosted.org/packages/01/d7/ced164e376f6747e9158c89988c293cd524ab8d215ae4e185e9929655d5c/pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:57762139821c31847cfb2df63c12f725788bd9f04bc2fb392790959b8f70f118", size = 2740492 },
- { url = "https://files.pythonhosted.org/packages/8b/1f/3dc6e769d5b7461040778816aab2b00422427bcaa4b56cc89e9c653b2605/pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0d1e85068e818c73e048fe28cfc769040bb1f475524f4745a5dc621f75ac7630", size = 1995714 },
- { url = "https://files.pythonhosted.org/packages/07/d7/a0bd09bc39283530b3f7c27033a814ef254ba3bd0b5cfd040b7abf1fe5da/pydantic_core-2.27.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:097830ed52fd9e427942ff3b9bc17fab52913b2f50f2880dc4a5611446606a54", size = 1997163 },
- { url = "https://files.pythonhosted.org/packages/2d/bb/2db4ad1762e1c5699d9b857eeb41959191980de6feb054e70f93085e1bcd/pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:044a50963a614ecfae59bb1eaf7ea7efc4bc62f49ed594e18fa1e5d953c40e9f", size = 2005217 },
- { url = "https://files.pythonhosted.org/packages/53/5f/23a5a3e7b8403f8dd8fc8a6f8b49f6b55c7d715b77dcf1f8ae919eeb5628/pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:4e0b4220ba5b40d727c7f879eac379b822eee5d8fff418e9d3381ee45b3b0362", size = 2127899 },
- { url = "https://files.pythonhosted.org/packages/c2/ae/aa38bb8dd3d89c2f1d8362dd890ee8f3b967330821d03bbe08fa01ce3766/pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5e4f4bb20d75e9325cc9696c6802657b58bc1dbbe3022f32cc2b2b632c3fbb96", size = 2155726 },
- { url = "https://files.pythonhosted.org/packages/98/61/4f784608cc9e98f70839187117ce840480f768fed5d386f924074bf6213c/pydantic_core-2.27.2-cp39-cp39-win32.whl", hash = "sha256:cca63613e90d001b9f2f9a9ceb276c308bfa2a43fafb75c8031c4f66039e8c6e", size = 1817219 },
- { url = "https://files.pythonhosted.org/packages/57/82/bb16a68e4a1a858bb3768c2c8f1ff8d8978014e16598f001ea29a25bf1d1/pydantic_core-2.27.2-cp39-cp39-win_amd64.whl", hash = "sha256:77d1bca19b0f7021b3a982e6f903dcd5b2b06076def36a652e3907f596e29f67", size = 1985382 },
- { url = "https://files.pythonhosted.org/packages/46/72/af70981a341500419e67d5cb45abe552a7c74b66326ac8877588488da1ac/pydantic_core-2.27.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:2bf14caea37e91198329b828eae1618c068dfb8ef17bb33287a7ad4b61ac314e", size = 1891159 },
- { url = "https://files.pythonhosted.org/packages/ad/3d/c5913cccdef93e0a6a95c2d057d2c2cba347815c845cda79ddd3c0f5e17d/pydantic_core-2.27.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b0cb791f5b45307caae8810c2023a184c74605ec3bcbb67d13846c28ff731ff8", size = 1768331 },
- { url = "https://files.pythonhosted.org/packages/f6/f0/a3ae8fbee269e4934f14e2e0e00928f9346c5943174f2811193113e58252/pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:688d3fd9fcb71f41c4c015c023d12a79d1c4c0732ec9eb35d96e3388a120dcf3", size = 1822467 },
- { url = "https://files.pythonhosted.org/packages/d7/7a/7bbf241a04e9f9ea24cd5874354a83526d639b02674648af3f350554276c/pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d591580c34f4d731592f0e9fe40f9cc1b430d297eecc70b962e93c5c668f15f", size = 1979797 },
- { url = "https://files.pythonhosted.org/packages/4f/5f/4784c6107731f89e0005a92ecb8a2efeafdb55eb992b8e9d0a2be5199335/pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:82f986faf4e644ffc189a7f1aafc86e46ef70372bb153e7001e8afccc6e54133", size = 1987839 },
- { url = "https://files.pythonhosted.org/packages/6d/a7/61246562b651dff00de86a5f01b6e4befb518df314c54dec187a78d81c84/pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:bec317a27290e2537f922639cafd54990551725fc844249e64c523301d0822fc", size = 1998861 },
- { url = "https://files.pythonhosted.org/packages/86/aa/837821ecf0c022bbb74ca132e117c358321e72e7f9702d1b6a03758545e2/pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:0296abcb83a797db256b773f45773da397da75a08f5fcaef41f2044adec05f50", size = 2116582 },
- { url = "https://files.pythonhosted.org/packages/81/b0/5e74656e95623cbaa0a6278d16cf15e10a51f6002e3ec126541e95c29ea3/pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:0d75070718e369e452075a6017fbf187f788e17ed67a3abd47fa934d001863d9", size = 2151985 },
- { url = "https://files.pythonhosted.org/packages/63/37/3e32eeb2a451fddaa3898e2163746b0cffbbdbb4740d38372db0490d67f3/pydantic_core-2.27.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7e17b560be3c98a8e3aa66ce828bdebb9e9ac6ad5466fba92eb74c4c95cb1151", size = 2004715 },
- { url = "https://files.pythonhosted.org/packages/29/0e/dcaea00c9dbd0348b723cae82b0e0c122e0fa2b43fa933e1622fd237a3ee/pydantic_core-2.27.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c33939a82924da9ed65dab5a65d427205a73181d8098e79b6b426bdf8ad4e656", size = 1891733 },
- { url = "https://files.pythonhosted.org/packages/86/d3/e797bba8860ce650272bda6383a9d8cad1d1c9a75a640c9d0e848076f85e/pydantic_core-2.27.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:00bad2484fa6bda1e216e7345a798bd37c68fb2d97558edd584942aa41b7d278", size = 1768375 },
- { url = "https://files.pythonhosted.org/packages/41/f7/f847b15fb14978ca2b30262548f5fc4872b2724e90f116393eb69008299d/pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c817e2b40aba42bac6f457498dacabc568c3b7a986fc9ba7c8d9d260b71485fb", size = 1822307 },
- { url = "https://files.pythonhosted.org/packages/9c/63/ed80ec8255b587b2f108e514dc03eed1546cd00f0af281e699797f373f38/pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:251136cdad0cb722e93732cb45ca5299fb56e1344a833640bf93b2803f8d1bfd", size = 1979971 },
- { url = "https://files.pythonhosted.org/packages/a9/6d/6d18308a45454a0de0e975d70171cadaf454bc7a0bf86b9c7688e313f0bb/pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d2088237af596f0a524d3afc39ab3b036e8adb054ee57cbb1dcf8e09da5b29cc", size = 1987616 },
- { url = "https://files.pythonhosted.org/packages/82/8a/05f8780f2c1081b800a7ca54c1971e291c2d07d1a50fb23c7e4aef4ed403/pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:d4041c0b966a84b4ae7a09832eb691a35aec90910cd2dbe7a208de59be77965b", size = 1998943 },
- { url = "https://files.pythonhosted.org/packages/5e/3e/fe5b6613d9e4c0038434396b46c5303f5ade871166900b357ada4766c5b7/pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:8083d4e875ebe0b864ffef72a4304827015cff328a1be6e22cc850753bfb122b", size = 2116654 },
- { url = "https://files.pythonhosted.org/packages/db/ad/28869f58938fad8cc84739c4e592989730bfb69b7c90a8fff138dff18e1e/pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f141ee28a0ad2123b6611b6ceff018039df17f32ada8b534e6aa039545a3efb2", size = 2152292 },
- { url = "https://files.pythonhosted.org/packages/a1/0c/c5c5cd3689c32ed1fe8c5d234b079c12c281c051759770c05b8bed6412b5/pydantic_core-2.27.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7d0c8399fcc1848491f00e0314bd59fb34a9c008761bcb422a057670c3f65e35", size = 2004961 },
-]
-
-[[package]]
-name = "pygments"
-version = "2.19.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/7c/2d/c3338d48ea6cc0feb8446d8e6937e1408088a72a39937982cc6111d17f84/pygments-2.19.1.tar.gz", hash = "sha256:61c16d2a8576dc0649d9f39e089b5f02bcd27fba10d8fb4dcc28173f7a45151f", size = 4968581 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/8a/0b/9fcc47d19c48b59121088dd6da2488a49d5f72dacf8262e2790a1d2c7d15/pygments-2.19.1-py3-none-any.whl", hash = "sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c", size = 1225293 },
-]
-
-[[package]]
-name = "pytest"
-version = "8.3.4"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "colorama", marker = "sys_platform == 'win32'" },
- { name = "exceptiongroup", marker = "python_full_version < '3.11'" },
- { name = "iniconfig" },
- { name = "packaging" },
- { name = "pluggy" },
- { name = "tomli", marker = "python_full_version < '3.11'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/05/35/30e0d83068951d90a01852cb1cef56e5d8a09d20c7f511634cc2f7e0372a/pytest-8.3.4.tar.gz", hash = "sha256:965370d062bce11e73868e0335abac31b4d3de0e82f4007408d242b4f8610761", size = 1445919 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/11/92/76a1c94d3afee238333bc0a42b82935dd8f9cf8ce9e336ff87ee14d9e1cf/pytest-8.3.4-py3-none-any.whl", hash = "sha256:50e16d954148559c9a74109af1eaf0c945ba2d8f30f0a3d3335edde19788b6f6", size = 343083 },
-]
-
-[[package]]
-name = "pywatchman"
-version = "2.0.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/cf/39/fc10dd952ac72a3a293936cd66a4551fdeb9012d2db99234a376100641ce/pywatchman-2.0.0.tar.gz", hash = "sha256:25354d9e3647f94411a4c13e510c83a1ceecc17977b0525ba41b16e7019c7b0c", size = 40570 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/d4/67/18a067aa83e25eebd5a4a391888d8c78c552d4037d58ae0fdb403b90b5e4/pywatchman-2.0.0-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:a8b531a9276c1dc8897510824cbb3290e4a5775793889758020593aedb97015b", size = 52498 },
- { url = "https://files.pythonhosted.org/packages/03/7d/2e57293d780253eb95de421b79c9bdd4fe5d7419e8b7a646630006838d5c/pywatchman-2.0.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:44217f0a38f40e64f975bd13eed91c73a076c0fb7e9e9908b96917389f53371e", size = 52499 },
- { url = "https://files.pythonhosted.org/packages/51/7f/5d68d803489770cffa5d2b44be99b978c866f8a4d8e835f9da850415ed8a/pywatchman-2.0.0-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:51c2b4c72bea6b9fd90caf20759f5bc47febf0fd27bf2f247b87c66e2f6bab02", size = 52557 },
- { url = "https://files.pythonhosted.org/packages/6e/14/ca3375aafe4f6a3ee3050f106633addddf09098165f2a5317e954d0450fa/pywatchman-2.0.0-cp39-cp39-macosx_14_0_arm64.whl", hash = "sha256:4183ea206be46cb45d48f59ef7540b849582798ccd51ce54e1e400237b7dbf43", size = 52502 },
-]
-
-[[package]]
-name = "pyyaml"
-version = "6.0.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/54/ed/79a089b6be93607fa5cdaedf301d7dfb23af5f25c398d5ead2525b063e17/pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e", size = 130631 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/9b/95/a3fac87cb7158e231b5a6012e438c647e1a87f09f8e0d123acec8ab8bf71/PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086", size = 184199 },
- { url = "https://files.pythonhosted.org/packages/c7/7a/68bd47624dab8fd4afbfd3c48e3b79efe09098ae941de5b58abcbadff5cb/PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf", size = 171758 },
- { url = "https://files.pythonhosted.org/packages/49/ee/14c54df452143b9ee9f0f29074d7ca5516a36edb0b4cc40c3f280131656f/PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237", size = 718463 },
- { url = "https://files.pythonhosted.org/packages/4d/61/de363a97476e766574650d742205be468921a7b532aa2499fcd886b62530/PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b", size = 719280 },
- { url = "https://files.pythonhosted.org/packages/6b/4e/1523cb902fd98355e2e9ea5e5eb237cbc5f3ad5f3075fa65087aa0ecb669/PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed", size = 751239 },
- { url = "https://files.pythonhosted.org/packages/b7/33/5504b3a9a4464893c32f118a9cc045190a91637b119a9c881da1cf6b7a72/PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180", size = 695802 },
- { url = "https://files.pythonhosted.org/packages/5c/20/8347dcabd41ef3a3cdc4f7b7a2aff3d06598c8779faa189cdbf878b626a4/PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68", size = 720527 },
- { url = "https://files.pythonhosted.org/packages/be/aa/5afe99233fb360d0ff37377145a949ae258aaab831bde4792b32650a4378/PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99", size = 144052 },
- { url = "https://files.pythonhosted.org/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e", size = 161774 },
- { url = "https://files.pythonhosted.org/packages/f8/aa/7af4e81f7acba21a4c6be026da38fd2b872ca46226673c89a758ebdc4fd2/PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774", size = 184612 },
- { url = "https://files.pythonhosted.org/packages/8b/62/b9faa998fd185f65c1371643678e4d58254add437edb764a08c5a98fb986/PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee", size = 172040 },
- { url = "https://files.pythonhosted.org/packages/ad/0c/c804f5f922a9a6563bab712d8dcc70251e8af811fce4524d57c2c0fd49a4/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c", size = 736829 },
- { url = "https://files.pythonhosted.org/packages/51/16/6af8d6a6b210c8e54f1406a6b9481febf9c64a3109c541567e35a49aa2e7/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317", size = 764167 },
- { url = "https://files.pythonhosted.org/packages/75/e4/2c27590dfc9992f73aabbeb9241ae20220bd9452df27483b6e56d3975cc5/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85", size = 762952 },
- { url = "https://files.pythonhosted.org/packages/9b/97/ecc1abf4a823f5ac61941a9c00fe501b02ac3ab0e373c3857f7d4b83e2b6/PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4", size = 735301 },
- { url = "https://files.pythonhosted.org/packages/45/73/0f49dacd6e82c9430e46f4a027baa4ca205e8b0a9dce1397f44edc23559d/PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e", size = 756638 },
- { url = "https://files.pythonhosted.org/packages/22/5f/956f0f9fc65223a58fbc14459bf34b4cc48dec52e00535c79b8db361aabd/PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5", size = 143850 },
- { url = "https://files.pythonhosted.org/packages/ed/23/8da0bbe2ab9dcdd11f4f4557ccaf95c10b9811b13ecced089d43ce59c3c8/PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44", size = 161980 },
- { url = "https://files.pythonhosted.org/packages/86/0c/c581167fc46d6d6d7ddcfb8c843a4de25bdd27e4466938109ca68492292c/PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab", size = 183873 },
- { url = "https://files.pythonhosted.org/packages/a8/0c/38374f5bb272c051e2a69281d71cba6fdb983413e6758b84482905e29a5d/PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725", size = 173302 },
- { url = "https://files.pythonhosted.org/packages/c3/93/9916574aa8c00aa06bbac729972eb1071d002b8e158bd0e83a3b9a20a1f7/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5", size = 739154 },
- { url = "https://files.pythonhosted.org/packages/95/0f/b8938f1cbd09739c6da569d172531567dbcc9789e0029aa070856f123984/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425", size = 766223 },
- { url = "https://files.pythonhosted.org/packages/b9/2b/614b4752f2e127db5cc206abc23a8c19678e92b23c3db30fc86ab731d3bd/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476", size = 767542 },
- { url = "https://files.pythonhosted.org/packages/d4/00/dd137d5bcc7efea1836d6264f049359861cf548469d18da90cd8216cf05f/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48", size = 731164 },
- { url = "https://files.pythonhosted.org/packages/c9/1f/4f998c900485e5c0ef43838363ba4a9723ac0ad73a9dc42068b12aaba4e4/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b", size = 756611 },
- { url = "https://files.pythonhosted.org/packages/df/d1/f5a275fdb252768b7a11ec63585bc38d0e87c9e05668a139fea92b80634c/PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4", size = 140591 },
- { url = "https://files.pythonhosted.org/packages/0c/e8/4f648c598b17c3d06e8753d7d13d57542b30d56e6c2dedf9c331ae56312e/PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8", size = 156338 },
- { url = "https://files.pythonhosted.org/packages/ef/e3/3af305b830494fa85d95f6d95ef7fa73f2ee1cc8ef5b495c7c3269fb835f/PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba", size = 181309 },
- { url = "https://files.pythonhosted.org/packages/45/9f/3b1c20a0b7a3200524eb0076cc027a970d320bd3a6592873c85c92a08731/PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1", size = 171679 },
- { url = "https://files.pythonhosted.org/packages/7c/9a/337322f27005c33bcb656c655fa78325b730324c78620e8328ae28b64d0c/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133", size = 733428 },
- { url = "https://files.pythonhosted.org/packages/a3/69/864fbe19e6c18ea3cc196cbe5d392175b4cf3d5d0ac1403ec3f2d237ebb5/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484", size = 763361 },
- { url = "https://files.pythonhosted.org/packages/04/24/b7721e4845c2f162d26f50521b825fb061bc0a5afcf9a386840f23ea19fa/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5", size = 759523 },
- { url = "https://files.pythonhosted.org/packages/2b/b2/e3234f59ba06559c6ff63c4e10baea10e5e7df868092bf9ab40e5b9c56b6/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc", size = 726660 },
- { url = "https://files.pythonhosted.org/packages/fe/0f/25911a9f080464c59fab9027482f822b86bf0608957a5fcc6eaac85aa515/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652", size = 751597 },
- { url = "https://files.pythonhosted.org/packages/14/0d/e2c3b43bbce3cf6bd97c840b46088a3031085179e596d4929729d8d68270/PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183", size = 140527 },
- { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446 },
- { url = "https://files.pythonhosted.org/packages/65/d8/b7a1db13636d7fb7d4ff431593c510c8b8fca920ade06ca8ef20015493c5/PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d", size = 184777 },
- { url = "https://files.pythonhosted.org/packages/0a/02/6ec546cd45143fdf9840b2c6be8d875116a64076218b61d68e12548e5839/PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f", size = 172318 },
- { url = "https://files.pythonhosted.org/packages/0e/9a/8cc68be846c972bda34f6c2a93abb644fb2476f4dcc924d52175786932c9/PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290", size = 720891 },
- { url = "https://files.pythonhosted.org/packages/e9/6c/6e1b7f40181bc4805e2e07f4abc10a88ce4648e7e95ff1abe4ae4014a9b2/PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12", size = 722614 },
- { url = "https://files.pythonhosted.org/packages/3d/32/e7bd8535d22ea2874cef6a81021ba019474ace0d13a4819c2a4bce79bd6a/PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19", size = 737360 },
- { url = "https://files.pythonhosted.org/packages/d7/12/7322c1e30b9be969670b672573d45479edef72c9a0deac3bb2868f5d7469/PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e", size = 699006 },
- { url = "https://files.pythonhosted.org/packages/82/72/04fcad41ca56491995076630c3ec1e834be241664c0c09a64c9a2589b507/PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725", size = 723577 },
- { url = "https://files.pythonhosted.org/packages/ed/5e/46168b1f2757f1fcd442bc3029cd8767d88a98c9c05770d8b420948743bb/PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631", size = 144593 },
- { url = "https://files.pythonhosted.org/packages/19/87/5124b1c1f2412bb95c59ec481eaf936cd32f0fe2a7b16b97b81c4c017a6a/PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8", size = 162312 },
-]
-
-[[package]]
-name = "rich"
-version = "13.9.4"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "markdown-it-py" },
- { name = "pygments" },
- { name = "typing-extensions", marker = "python_full_version < '3.11'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/ab/3a/0316b28d0761c6734d6bc14e770d85506c986c85ffb239e688eeaab2c2bc/rich-13.9.4.tar.gz", hash = "sha256:439594978a49a09530cff7ebc4b5c7103ef57baf48d5ea3184f21d9a2befa098", size = 223149 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/19/71/39c7c0d87f8d4e6c020a393182060eaefeeae6c01dab6a84ec346f2567df/rich-13.9.4-py3-none-any.whl", hash = "sha256:6049d5e6ec054bf2779ab3358186963bac2ea89175919d699e378b99738c2a90", size = 242424 },
-]
-
-[[package]]
-name = "shellingham"
-version = "1.5.4"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/58/15/8b3609fd3830ef7b27b655beb4b4e9c62313a4e8da8c676e142cc210d58e/shellingham-1.5.4.tar.gz", hash = "sha256:8dbca0739d487e5bd35ab3ca4b36e11c4078f3a234bfce294b0a0291363404de", size = 10310 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755 },
-]
-
-[[package]]
-name = "tomli"
-version = "2.2.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/18/87/302344fed471e44a87289cf4967697d07e532f2421fdaf868a303cbae4ff/tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff", size = 17175 }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/43/ca/75707e6efa2b37c77dadb324ae7d9571cb424e61ea73fad7c56c2d14527f/tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249", size = 131077 },
- { url = "https://files.pythonhosted.org/packages/c7/16/51ae563a8615d472fdbffc43a3f3d46588c264ac4f024f63f01283becfbb/tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6", size = 123429 },
- { url = "https://files.pythonhosted.org/packages/f1/dd/4f6cd1e7b160041db83c694abc78e100473c15d54620083dbd5aae7b990e/tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a", size = 226067 },
- { url = "https://files.pythonhosted.org/packages/a9/6b/c54ede5dc70d648cc6361eaf429304b02f2871a345bbdd51e993d6cdf550/tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee", size = 236030 },
- { url = "https://files.pythonhosted.org/packages/1f/47/999514fa49cfaf7a92c805a86c3c43f4215621855d151b61c602abb38091/tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e", size = 240898 },
- { url = "https://files.pythonhosted.org/packages/73/41/0a01279a7ae09ee1573b423318e7934674ce06eb33f50936655071d81a24/tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4", size = 229894 },
- { url = "https://files.pythonhosted.org/packages/55/18/5d8bc5b0a0362311ce4d18830a5d28943667599a60d20118074ea1b01bb7/tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106", size = 245319 },
- { url = "https://files.pythonhosted.org/packages/92/a3/7ade0576d17f3cdf5ff44d61390d4b3febb8a9fc2b480c75c47ea048c646/tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8", size = 238273 },
- { url = "https://files.pythonhosted.org/packages/72/6f/fa64ef058ac1446a1e51110c375339b3ec6be245af9d14c87c4a6412dd32/tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff", size = 98310 },
- { url = "https://files.pythonhosted.org/packages/6a/1c/4a2dcde4a51b81be3530565e92eda625d94dafb46dbeb15069df4caffc34/tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b", size = 108309 },
- { url = "https://files.pythonhosted.org/packages/52/e1/f8af4c2fcde17500422858155aeb0d7e93477a0d59a98e56cbfe75070fd0/tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea", size = 132762 },
- { url = "https://files.pythonhosted.org/packages/03/b8/152c68bb84fc00396b83e7bbddd5ec0bd3dd409db4195e2a9b3e398ad2e3/tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8", size = 123453 },
- { url =
"https://files.pythonhosted.org/packages/c8/d6/fc9267af9166f79ac528ff7e8c55c8181ded34eb4b0e93daa767b8841573/tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192", size = 233486 }, - { url = "https://files.pythonhosted.org/packages/5c/51/51c3f2884d7bab89af25f678447ea7d297b53b5a3b5730a7cb2ef6069f07/tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222", size = 242349 }, - { url = "https://files.pythonhosted.org/packages/ab/df/bfa89627d13a5cc22402e441e8a931ef2108403db390ff3345c05253935e/tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77", size = 252159 }, - { url = "https://files.pythonhosted.org/packages/9e/6e/fa2b916dced65763a5168c6ccb91066f7639bdc88b48adda990db10c8c0b/tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6", size = 237243 }, - { url = "https://files.pythonhosted.org/packages/b4/04/885d3b1f650e1153cbb93a6a9782c58a972b94ea4483ae4ac5cedd5e4a09/tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd", size = 259645 }, - { url = "https://files.pythonhosted.org/packages/9c/de/6b432d66e986e501586da298e28ebeefd3edc2c780f3ad73d22566034239/tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e", size = 244584 }, - { url = "https://files.pythonhosted.org/packages/1c/9a/47c0449b98e6e7d1be6cbac02f93dd79003234ddc4aaab6ba07a9a7482e2/tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98", size = 98875 }, - { url = 
"https://files.pythonhosted.org/packages/ef/60/9b9638f081c6f1261e2688bd487625cd1e660d0a85bd469e91d8db969734/tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4", size = 109418 }, - { url = "https://files.pythonhosted.org/packages/04/90/2ee5f2e0362cb8a0b6499dc44f4d7d48f8fff06d28ba46e6f1eaa61a1388/tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7", size = 132708 }, - { url = "https://files.pythonhosted.org/packages/c0/ec/46b4108816de6b385141f082ba99e315501ccd0a2ea23db4a100dd3990ea/tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c", size = 123582 }, - { url = "https://files.pythonhosted.org/packages/a0/bd/b470466d0137b37b68d24556c38a0cc819e8febe392d5b199dcd7f578365/tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13", size = 232543 }, - { url = "https://files.pythonhosted.org/packages/d9/e5/82e80ff3b751373f7cead2815bcbe2d51c895b3c990686741a8e56ec42ab/tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281", size = 241691 }, - { url = "https://files.pythonhosted.org/packages/05/7e/2a110bc2713557d6a1bfb06af23dd01e7dde52b6ee7dadc589868f9abfac/tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272", size = 251170 }, - { url = "https://files.pythonhosted.org/packages/64/7b/22d713946efe00e0adbcdfd6d1aa119ae03fd0b60ebed51ebb3fa9f5a2e5/tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140", size = 236530 }, - { url = 
"https://files.pythonhosted.org/packages/38/31/3a76f67da4b0cf37b742ca76beaf819dca0ebef26d78fc794a576e08accf/tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2", size = 258666 }, - { url = "https://files.pythonhosted.org/packages/07/10/5af1293da642aded87e8a988753945d0cf7e00a9452d3911dd3bb354c9e2/tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744", size = 243954 }, - { url = "https://files.pythonhosted.org/packages/5b/b9/1ed31d167be802da0fc95020d04cd27b7d7065cc6fbefdd2f9186f60d7bd/tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec", size = 98724 }, - { url = "https://files.pythonhosted.org/packages/c7/32/b0963458706accd9afcfeb867c0f9175a741bf7b19cd424230714d722198/tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69", size = 109383 }, - { url = "https://files.pythonhosted.org/packages/6e/c2/61d3e0f47e2b74ef40a68b9e6ad5984f6241a942f7cd3bbfbdbd03861ea9/tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc", size = 14257 }, -] - -[[package]] -name = "tomli-w" -version = "1.2.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/19/75/241269d1da26b624c0d5e110e8149093c759b7a286138f4efd61a60e75fe/tomli_w-1.2.0.tar.gz", hash = "sha256:2dd14fac5a47c27be9cd4c976af5a12d87fb1f0b4512f81d69cce3b35ae25021", size = 7184 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/c7/18/c86eb8e0202e32dd3df50d43d7ff9854f8e0603945ff398974c1d91ac1ef/tomli_w-1.2.0-py3-none-any.whl", hash = "sha256:188306098d013b691fcadc011abd66727d3c414c571bb01b1a174ba8c983cf90", size = 6675 }, -] - -[[package]] -name = "typer" -version = "0.15.1" -source = { registry = "https://pypi.org/simple" } 
-dependencies = [ - { name = "click" }, - { name = "rich" }, - { name = "shellingham" }, - { name = "typing-extensions" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/cb/ce/dca7b219718afd37a0068f4f2530a727c2b74a8b6e8e0c0080a4c0de4fcd/typer-0.15.1.tar.gz", hash = "sha256:a0588c0a7fa68a1978a069818657778f86abe6ff5ea6abf472f940a08bfe4f0a", size = 99789 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/d0/cc/0a838ba5ca64dc832aa43f727bd586309846b0ffb2ce52422543e6075e8a/typer-0.15.1-py3-none-any.whl", hash = "sha256:7994fb7b8155b64d3402518560648446072864beefd44aa2dc36972a5972e847", size = 44908 }, -] - -[[package]] -name = "typing-extensions" -version = "4.12.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/df/db/f35a00659bc03fec321ba8bce9420de607a1d37f8342eee1863174c69557/typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8", size = 85321 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d", size = 37438 }, -] - -[[package]] -name = "watchdog" -version = "6.0.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/db/7d/7f3d619e951c88ed75c6037b246ddcf2d322812ee8ea189be89511721d54/watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282", size = 131220 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/0c/56/90994d789c61df619bfc5ce2ecdabd5eeff564e1eb47512bd01b5e019569/watchdog-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d1cdb490583ebd691c012b3d6dae011000fe42edb7a82ece80965b42abd61f26", size = 96390 }, - { url = 
"https://files.pythonhosted.org/packages/55/46/9a67ee697342ddf3c6daa97e3a587a56d6c4052f881ed926a849fcf7371c/watchdog-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bc64ab3bdb6a04d69d4023b29422170b74681784ffb9463ed4870cf2f3e66112", size = 88389 }, - { url = "https://files.pythonhosted.org/packages/44/65/91b0985747c52064d8701e1075eb96f8c40a79df889e59a399453adfb882/watchdog-6.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c897ac1b55c5a1461e16dae288d22bb2e412ba9807df8397a635d88f671d36c3", size = 89020 }, - { url = "https://files.pythonhosted.org/packages/e0/24/d9be5cd6642a6aa68352ded4b4b10fb0d7889cb7f45814fb92cecd35f101/watchdog-6.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6eb11feb5a0d452ee41f824e271ca311a09e250441c262ca2fd7ebcf2461a06c", size = 96393 }, - { url = "https://files.pythonhosted.org/packages/63/7a/6013b0d8dbc56adca7fdd4f0beed381c59f6752341b12fa0886fa7afc78b/watchdog-6.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ef810fbf7b781a5a593894e4f439773830bdecb885e6880d957d5b9382a960d2", size = 88392 }, - { url = "https://files.pythonhosted.org/packages/d1/40/b75381494851556de56281e053700e46bff5b37bf4c7267e858640af5a7f/watchdog-6.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:afd0fe1b2270917c5e23c2a65ce50c2a4abb63daafb0d419fde368e272a76b7c", size = 89019 }, - { url = "https://files.pythonhosted.org/packages/39/ea/3930d07dafc9e286ed356a679aa02d777c06e9bfd1164fa7c19c288a5483/watchdog-6.0.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdd4e6f14b8b18c334febb9c4425a878a2ac20efd1e0b231978e7b150f92a948", size = 96471 }, - { url = "https://files.pythonhosted.org/packages/12/87/48361531f70b1f87928b045df868a9fd4e253d9ae087fa4cf3f7113be363/watchdog-6.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c7c15dda13c4eb00d6fb6fc508b3c0ed88b9d5d374056b239c4ad1611125c860", size = 88449 }, - { url = 
"https://files.pythonhosted.org/packages/5b/7e/8f322f5e600812e6f9a31b75d242631068ca8f4ef0582dd3ae6e72daecc8/watchdog-6.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f10cb2d5902447c7d0da897e2c6768bca89174d0c6e1e30abec5421af97a5b0", size = 89054 }, - { url = "https://files.pythonhosted.org/packages/68/98/b0345cabdce2041a01293ba483333582891a3bd5769b08eceb0d406056ef/watchdog-6.0.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:490ab2ef84f11129844c23fb14ecf30ef3d8a6abafd3754a6f75ca1e6654136c", size = 96480 }, - { url = "https://files.pythonhosted.org/packages/85/83/cdf13902c626b28eedef7ec4f10745c52aad8a8fe7eb04ed7b1f111ca20e/watchdog-6.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:76aae96b00ae814b181bb25b1b98076d5fc84e8a53cd8885a318b42b6d3a5134", size = 88451 }, - { url = "https://files.pythonhosted.org/packages/fe/c4/225c87bae08c8b9ec99030cd48ae9c4eca050a59bf5c2255853e18c87b50/watchdog-6.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a175f755fc2279e0b7312c0035d52e27211a5bc39719dd529625b1930917345b", size = 89057 }, - { url = "https://files.pythonhosted.org/packages/05/52/7223011bb760fce8ddc53416beb65b83a3ea6d7d13738dde75eeb2c89679/watchdog-6.0.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e6f0e77c9417e7cd62af82529b10563db3423625c5fce018430b249bf977f9e8", size = 96390 }, - { url = "https://files.pythonhosted.org/packages/9c/62/d2b21bc4e706d3a9d467561f487c2938cbd881c69f3808c43ac1ec242391/watchdog-6.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:90c8e78f3b94014f7aaae121e6b909674df5b46ec24d6bebc45c44c56729af2a", size = 88386 }, - { url = "https://files.pythonhosted.org/packages/ea/22/1c90b20eda9f4132e4603a26296108728a8bfe9584b006bd05dd94548853/watchdog-6.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e7631a77ffb1f7d2eefa4445ebbee491c720a5661ddf6df3498ebecae5ed375c", size = 89017 }, - { url = 
"https://files.pythonhosted.org/packages/30/ad/d17b5d42e28a8b91f8ed01cb949da092827afb9995d4559fd448d0472763/watchdog-6.0.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c7ac31a19f4545dd92fc25d200694098f42c9a8e391bc00bdd362c5736dbf881", size = 87902 }, - { url = "https://files.pythonhosted.org/packages/5c/ca/c3649991d140ff6ab67bfc85ab42b165ead119c9e12211e08089d763ece5/watchdog-6.0.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9513f27a1a582d9808cf21a07dae516f0fab1cf2d7683a742c498b93eedabb11", size = 88380 }, - { url = "https://files.pythonhosted.org/packages/5b/79/69f2b0e8d3f2afd462029031baafb1b75d11bb62703f0e1022b2e54d49ee/watchdog-6.0.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7a0e56874cfbc4b9b05c60c8a1926fedf56324bb08cfbc188969777940aef3aa", size = 87903 }, - { url = "https://files.pythonhosted.org/packages/e2/2b/dc048dd71c2e5f0f7ebc04dd7912981ec45793a03c0dc462438e0591ba5d/watchdog-6.0.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:e6439e374fc012255b4ec786ae3c4bc838cd7309a540e5fe0952d03687d8804e", size = 88381 }, - { url = "https://files.pythonhosted.org/packages/a9/c7/ca4bf3e518cb57a686b2feb4f55a1892fd9a3dd13f470fca14e00f80ea36/watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13", size = 79079 }, - { url = "https://files.pythonhosted.org/packages/5c/51/d46dc9332f9a647593c947b4b88e2381c8dfc0942d15b8edc0310fa4abb1/watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379", size = 79078 }, - { url = "https://files.pythonhosted.org/packages/d4/57/04edbf5e169cd318d5f07b4766fee38e825d64b6913ca157ca32d1a42267/watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e", size = 79076 }, - { url = 
"https://files.pythonhosted.org/packages/ab/cc/da8422b300e13cb187d2203f20b9253e91058aaf7db65b74142013478e66/watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f", size = 79077 }, - { url = "https://files.pythonhosted.org/packages/2c/3b/b8964e04ae1a025c44ba8e4291f86e97fac443bca31de8bd98d3263d2fcf/watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26", size = 79078 }, - { url = "https://files.pythonhosted.org/packages/62/ae/a696eb424bedff7407801c257d4b1afda455fe40821a2be430e173660e81/watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c", size = 79077 }, - { url = "https://files.pythonhosted.org/packages/b5/e8/dbf020b4d98251a9860752a094d09a65e1b436ad181faf929983f697048f/watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2", size = 79078 }, - { url = "https://files.pythonhosted.org/packages/07/f6/d0e5b343768e8bcb4cda79f0f2f55051bf26177ecd5651f84c07567461cf/watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a", size = 79065 }, - { url = "https://files.pythonhosted.org/packages/db/d9/c495884c6e548fce18a8f40568ff120bc3a4b7b99813081c8ac0c936fa64/watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680", size = 79070 }, - { url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067 }, -]