Tutorial

New updates and improvements for Macfleet.

Important notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may arise from the use, modification, or implementation of these examples. Always review and test code in a safe environment before using it on production systems.

File Size and Storage Analysis on macOS

Master file size analysis and storage management on your MacFleet devices using powerful disk usage tools. This tutorial covers basic size queries, storage analysis, disk usage monitoring, and advanced storage optimization techniques for effective fleet storage management.

Understanding macOS Storage Analysis

macOS provides several command-line tools for storage analysis:

  • du - Disk usage analysis for files and directories
  • df - File system disk space usage
  • ls - File listing with size information
  • find - Search files by size criteria
  • stat - Detailed file information including size
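A quick way to see what each of these tools reports is to run them against a scratch file. This sketch is illustrative only — the paths and sizes are placeholders, and `stat -f%z` is the BSD/macOS form (a GNU `stat -c%s` fallback is included so the snippet also runs on Linux):

```shell
#!/bin/bash

# Create a 4 KB scratch file to exercise each tool
demo_file="$(mktemp /tmp/sizedemo.XXXXXX)"
head -c 4096 /dev/zero > "$demo_file"

du -h "$demo_file"                       # disk usage (allocated blocks, human-readable)
df -h /                                  # free space on the root volume
ls -lh "$demo_file"                      # listing with logical file size
stat -f%z "$demo_file" 2>/dev/null \
    || stat -c%s "$demo_file"            # exact size in bytes (BSD form, GNU fallback)
find /tmp -maxdepth 1 -type f -size +1k -name "$(basename "$demo_file")"  # size-based search

rm -f "$demo_file"
```

Note that `du` reports allocated disk blocks while `stat` reports the logical byte count, so the two can legitimately differ for the same file.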

Basic File and Folder Size Operations

Find Single File Size

#!/bin/bash

# Get size of a specific file
FILE_PATH="/Users/stefan/Desktop/Document.txt"

echo "File size for: $FILE_PATH"
du -h "$FILE_PATH"

echo "File size analysis completed"

Find Folder Size

#!/bin/bash

# Get size of a folder
FOLDER_PATH="/Users/stefan/Desktop/Wallpapers"

echo "Folder size for: $FOLDER_PATH"
du -h "$FOLDER_PATH"

echo "Folder size analysis completed"

Detailed Size Information

#!/bin/bash

# Get detailed size information with options
analyze_size() {
    local target_path="$1"
    
    if [[ ! -e "$target_path" ]]; then
        echo "Error: Path not found: $target_path"
        return 1
    fi
    
    echo "=== Size Analysis for: $target_path ==="
    
    # Basic size
    echo "Total size:"
    du -h "$target_path"
    
    # Size with all files listed
    echo -e "\nDetailed breakdown:"
    du -ah "$target_path" | head -20
    
    # Grand total with summary
    echo -e "\nSummary with total:"
    du -ch "$target_path"
}

# Usage
analyze_size "/Users/stefan/Desktop/Document.txt"

Advanced Storage Analysis

Directory Size Breakdown

#!/bin/bash

# Analyze directory sizes with sorting
directory_analysis() {
    local base_path="$1"
    local max_depth="${2:-1}"
    
    echo "=== Directory Size Analysis: $base_path ==="
    
    # Top-level directory sizes, sorted by size
    echo "Largest directories:"
    du -h -d "$max_depth" "$base_path" 2>/dev/null | sort -hr | head -20
    
    # Count of items
    echo -e "\nItem counts:"
    echo "Files: $(find "$base_path" -type f 2>/dev/null | wc -l)"
    echo "Directories: $(find "$base_path" -type d 2>/dev/null | wc -l)"
    
    # File type analysis
    echo -e "\nFile type distribution:"
    find "$base_path" -type f -name "*.*" 2>/dev/null | awk -F/ '{print $NF}' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr | head -10
}

# Usage
directory_analysis "/Users" 2

Large File Detection

#!/bin/bash

# Find large files in directory
find_large_files() {
    local search_path="$1"
    local min_size="${2:-100M}"
    
    echo "=== Large Files Analysis (>$min_size) in: $search_path ==="
    
    # Find large files
    echo "Large files found:"
    find "$search_path" -type f -size +"$min_size" -exec ls -lh {} \; 2>/dev/null | sort -k5 -hr
    
    # Count and total size
    local large_files_count=$(find "$search_path" -type f -size +"$min_size" 2>/dev/null | wc -l)
    echo -e "\nTotal large files: $large_files_count"
    
    if [[ $large_files_count -gt 0 ]]; then
        echo "Total size of large files:"
        find "$search_path" -type f -size +"$min_size" -exec du -ch {} + 2>/dev/null | tail -1
    fi
}

# Usage
find_large_files "/Users" "50M"

Storage Usage by File Type

#!/bin/bash

# Analyze storage usage by file extension
storage_by_type() {
    local search_path="$1"
    
    echo "=== Storage Usage by File Type: $search_path ==="
    
    # Create temporary file for analysis
    local temp_file="/tmp/file_analysis_$$"
    
    # Get all files with extensions and their sizes
    find "$search_path" -type f -name "*.*" -exec sh -c '
        for file; do
            ext=$(echo "$file" | rev | cut -d. -f1 | rev | tr "[:upper:]" "[:lower:]")
            size=$(stat -f%z "$file" 2>/dev/null || echo 0)
            echo "$ext $size"
        done
    ' _ {} + > "$temp_file"
    
    # Summarize by extension
    echo "Top file types by total size:"
    awk '{
        ext_size[$1] += $2
        ext_count[$1]++
    } END {
        for (ext in ext_size) {
            printf "%-10s %15s %10d files\n", ext, ext_size[ext], ext_count[ext]
        }
    }' "$temp_file" | sort -k2 -nr | head -15 | \
    while read -r ext size count _; do
        # Convert bytes to human readable
        if [[ $size -gt 1073741824 ]]; then
            printf "%-10s %10.2f GB %10d files\n" "$ext" $(echo "scale=2; $size/1073741824" | bc) "$count"
        elif [[ $size -gt 1048576 ]]; then
            printf "%-10s %10.2f MB %10d files\n" "$ext" $(echo "scale=2; $size/1048576" | bc) "$count"
        else
            printf "%-10s %10.2f KB %10d files\n" "$ext" $(echo "scale=2; $size/1024" | bc) "$count"
        fi
    done
    
    # Cleanup
    rm -f "$temp_file"
}

# Usage
storage_by_type "/Users"

Disk Space Monitoring

System Disk Usage

#!/bin/bash

# Comprehensive disk usage report
system_disk_report() {
    echo "=== System Disk Usage Report ==="
    
    # File system usage
    echo "File System Usage:"
    df -h
    
    echo -e "\nDisk usage by mount point:"
    df -h | awk 'NR>1 {print $5 " " $1}' | while read output; do
        usage=$(echo $output | awk '{print $1}' | cut -d'%' -f1)
        partition=$(echo $output | awk '{print $2}')
        if [[ $usage -ge 90 ]]; then
            echo "🚨 $partition: ${usage}% (CRITICAL)"
        elif [[ $usage -ge 80 ]]; then
            echo "⚠️  $partition: ${usage}% (WARNING)"
        else
            echo "✅ $partition: ${usage}%"
        fi
    done
    
    # Top-level directories by size (a fully recursive du of / would be very slow)
    echo -e "\nTop directories by size:"
    du -h -d 1 / 2>/dev/null | sort -hr | head -10
}

# Run system report
system_disk_report

Storage Trend Analysis

#!/bin/bash

# Monitor storage trends over time
storage_trend_monitor() {
    local target_path="$1"
    local log_file="/var/log/macfleet_storage_trends.log"
    
    echo "=== Storage Trend Monitoring: $target_path ==="
    
    # Current timestamp and size
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local current_size=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    
    # Log current measurement
    echo "$timestamp,$target_path,$current_size" >> "$log_file"
    
    # Show recent trends if log exists
    if [[ -f "$log_file" ]]; then
        echo "Recent storage trends (last 10 measurements):"
        echo "Timestamp                | Path              | Size (KB)    | Change"
        echo "-------------------------|-------------------|--------------|--------"
        
        tail -10 "$log_file" | while IFS=',' read -r time path size; do
            # Calculate change from previous measurement
            printf "%-24s | %-17s | %12s | \n" "$time" "$(basename "$path")" "$size"
        done
        
        # Calculate growth rate against the first measurement for this path
        local first_size=$(awk -F',' -v p="$target_path" '$2 == p {print $3; exit}' "$log_file")
        first_size=${first_size:-$current_size}
        local growth=$((current_size - first_size))
        local growth_percent=$(echo "scale=2; ($growth * 100) / $first_size" | bc 2>/dev/null || echo "0")
        
        echo -e "\nGrowth since first measurement: ${growth} KB (${growth_percent}%)"
    fi
}

# Usage
storage_trend_monitor "/Users"

Enterprise Storage Management System

#!/bin/bash

# MacFleet Storage Management Tool
# Comprehensive storage analysis and optimization for fleet devices

# Configuration
SCRIPT_VERSION="1.0.0"
LOG_FILE="/var/log/macfleet_storage.log"
REPORT_DIR="/etc/macfleet/reports/storage"
CONFIG_DIR="/etc/macfleet/storage"
ALERT_DIR="$CONFIG_DIR/alerts"

# Create directories if they don't exist
mkdir -p "$REPORT_DIR" "$CONFIG_DIR" "$ALERT_DIR"

# Storage categories for analysis
declare -A STORAGE_CATEGORIES=(
    ["system_critical"]="/System,/Library,/usr,/bin,/sbin"
    ["user_data"]="/Users,/home"
    ["applications"]="/Applications"
    ["caches"]="/Library/Caches,/Users/*/Library/Caches"
    ["logs"]="/var/log,/Library/Logs,/Users/*/Library/Logs"
    ["downloads"]="/Users/*/Downloads"
    ["documents"]="/Users/*/Documents,/Users/*/Desktop"
    ["media"]="/Users/*/Movies,/Users/*/Music,/Users/*/Pictures"
    ["temporary"]="/tmp,/var/tmp,/Users/*/Library/Application Support/*/Cache"
    ["development"]="/usr/local,/opt,/Users/*/Development"
)

# Storage thresholds and policies
declare -A STORAGE_POLICIES=(
    ["space_warning"]="80,85,90,95"
    ["large_file_threshold"]="100M,500M,1G,5G"
    ["old_file_threshold"]="30,90,180,365"
    ["cache_cleanup_age"]="7,14,30,60"
    ["temp_cleanup_age"]="1,3,7,14"
    ["log_retention_days"]="30,60,90,180"
)

# Logging function
log_action() {
    local message="$1"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] $message" | tee -a "$LOG_FILE"
}

# Advanced storage analysis engine
analyze_storage_comprehensive() {
    local target_path="$1"
    local analysis_level="${2:-standard}"
    local category="${3:-general}"
    
    log_action "Starting comprehensive storage analysis: $target_path (Level: $analysis_level)"
    
    if [[ ! -e "$target_path" ]]; then
        log_action "ERROR: Target path does not exist: $target_path"
        return 1
    fi
    
    local report_file="$REPORT_DIR/storage_analysis_$(echo "$target_path" | tr '/' '_')_$(date +%Y%m%d_%H%M%S).json"
    
    # Initialize report
    cat > "$report_file" << EOF
{
    "analysis_info": {
        "target_path": "$target_path",
        "analysis_level": "$analysis_level",
        "category": "$category",
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "script_version": "$SCRIPT_VERSION"
    },
    "storage_metrics": {},
    "file_analysis": {},
    "recommendations": [],
    "alerts": []
}
EOF
    
    # Basic storage metrics
    local total_size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local total_files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local total_dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    local largest_file_size=$(find "$target_path" -type f -exec stat -f%z {} + 2>/dev/null | sort -nr | head -1)
    largest_file_size=${largest_file_size:-0}  # default to 0 so jq --argjson never receives an empty value
    
    # Update report with basic metrics
    jq --argjson total_size_kb "$total_size_kb" \
       --argjson total_files "$total_files" \
       --argjson total_dirs "$((total_dirs - 1))" \
       --argjson largest_file "$largest_file_size" \
       '.storage_metrics = {
           "total_size_kb": $total_size_kb,
           "total_size_mb": ($total_size_kb / 1024),
           "total_size_gb": ($total_size_kb / 1048576),
           "total_files": $total_files,
           "total_directories": $total_dirs,
           "largest_file_bytes": $largest_file,
           "average_file_size": (if $total_files > 0 then ($total_size_kb * 1024 / $total_files) else 0 end)
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Perform analysis based on level
    case "$analysis_level" in
        "quick")
            perform_quick_storage_analysis "$target_path" "$report_file"
            ;;
        "standard")
            perform_standard_storage_analysis "$target_path" "$report_file"
            ;;
        "comprehensive")
            perform_comprehensive_storage_analysis "$target_path" "$report_file"
            ;;
        "optimization")
            perform_optimization_analysis "$target_path" "$report_file"
            ;;
        "security")
            perform_security_storage_analysis "$target_path" "$report_file"
            ;;
    esac
    
    log_action "Storage analysis completed: $report_file"
    echo "$report_file"
}

# Quick storage analysis
perform_quick_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing quick storage analysis..."
    
    # Top-level directory sizes
    local dir_sizes=$(du -h -d 1 "$target_path" 2>/dev/null | sort -hr | head -10)
    
    # Large files (>100MB)
    local large_files=$(find "$target_path" -type f -size +100M 2>/dev/null | head -10)
    
    # Update report
    jq --arg dir_sizes "$dir_sizes" \
       --arg large_files "$large_files" \
       '.file_analysis.quick = {
           "directory_sizes": $dir_sizes,
           "large_files_100mb": $large_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Standard storage analysis
perform_standard_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing standard storage analysis..."
    
    # File age analysis
    local recent_files=$(find "$target_path" -type f -mtime -7 2>/dev/null | wc -l)
    local old_files=$(find "$target_path" -type f -mtime +365 2>/dev/null | wc -l)
    
    # File size distribution
    local small_files=$(find "$target_path" -type f -size -1k 2>/dev/null | wc -l)
    local medium_files=$(find "$target_path" -type f -size +1k -size -1M 2>/dev/null | wc -l)
    local large_files=$(find "$target_path" -type f -size +1M -size -100M 2>/dev/null | wc -l)
    local huge_files=$(find "$target_path" -type f -size +100M 2>/dev/null | wc -l)
    
    # Empty files and directories
    local empty_files=$(find "$target_path" -type f -empty 2>/dev/null | wc -l)
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    
    # Update report
    jq --argjson recent_files "$recent_files" \
       --argjson old_files "$old_files" \
       --argjson small_files "$small_files" \
       --argjson medium_files "$medium_files" \
       --argjson large_files "$large_files" \
       --argjson huge_files "$huge_files" \
       --argjson empty_files "$empty_files" \
       --argjson empty_dirs "$empty_dirs" \
       '.file_analysis.standard = {
           "file_age": {
               "recent_files": $recent_files,
               "old_files": $old_files
           },
           "file_sizes": {
               "small_1kb": $small_files,
               "medium_1kb_1mb": $medium_files,
               "large_1mb_100mb": $large_files,
               "huge_100mb_plus": $huge_files
           },
           "empty_items": {
               "empty_files": $empty_files,
               "empty_directories": $empty_dirs
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Comprehensive storage analysis
perform_comprehensive_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing comprehensive storage analysis..."
    
    # Duplicate file detection (md5 -r prints "hash filename"; note BSD uniq has no -w option)
    local duplicates_temp="/tmp/duplicates_$$"
    find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | awk '{print $1}' | sort | uniq -d > "$duplicates_temp"
    local duplicate_count=$(wc -l < "$duplicates_temp")
    
    # File extension analysis
    local extensions_temp="/tmp/extensions_$$"
    find "$target_path" -type f -name "*.*" 2>/dev/null | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr > "$extensions_temp"
    
    # Access pattern analysis
    local never_accessed=$(find "$target_path" -type f -atime +365 2>/dev/null | wc -l)
    local recently_accessed=$(find "$target_path" -type f -atime -1 2>/dev/null | wc -l)
    
    # Directory depth analysis
    local max_depth=$(find "$target_path" -type d 2>/dev/null | awk -F/ '{print NF}' | sort -n | tail -1)
    local current_depth=$(echo "$target_path" | tr -cd '/' | wc -c)
    local relative_depth=$((max_depth - current_depth))
    
    # Update report
    jq --argjson duplicate_count "$duplicate_count" \
       --argjson never_accessed "$never_accessed" \
       --argjson recently_accessed "$recently_accessed" \
       --argjson max_depth "$relative_depth" \
       '.file_analysis.comprehensive = {
           "duplicates": {
               "duplicate_file_sets": $duplicate_count
           },
           "access_patterns": {
               "never_accessed": $never_accessed,
               "recently_accessed": $recently_accessed
           },
           "structure": {
               "maximum_depth": $max_depth
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Cleanup temp files
    rm -f "$duplicates_temp" "$extensions_temp"
}

# Optimization analysis
perform_optimization_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing optimization analysis..."
    
    # Cache files (group the -o tests so -exec applies to every match, not just the last test)
    local cache_files=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) 2>/dev/null | wc -l)
    local cache_size=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Log files
    local log_files=$(find "$target_path" -name "*.log" 2>/dev/null | wc -l)
    local log_size=$(find "$target_path" -name "*.log" -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Temporary files
    local temp_files=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) 2>/dev/null | wc -l)
    local temp_size=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Backup files
    local backup_files=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) 2>/dev/null | wc -l)
    local backup_size=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Potential savings calculation
    local total_cleanup_kb=$((cache_size + log_size + temp_size + backup_size))
    local total_cleanup_mb=$((total_cleanup_kb / 1024))
    
    # Generate recommendations
    local recommendations=()
    [[ $cache_size -gt 102400 ]] && recommendations+=("Consider cleaning cache files (${cache_size} KB)")
    [[ $log_size -gt 51200 ]] && recommendations+=("Archive or clean old log files (${log_size} KB)")
    [[ $temp_size -gt 10240 ]] && recommendations+=("Remove temporary files (${temp_size} KB)")
    [[ $backup_size -gt 102400 ]] && recommendations+=("Review backup file retention (${backup_size} KB)")
    
    # Update report
    jq --argjson cache_files "$cache_files" \
       --argjson cache_size "$cache_size" \
       --argjson log_files "$log_files" \
       --argjson log_size "$log_size" \
       --argjson temp_files "$temp_files" \
       --argjson temp_size "$temp_size" \
       --argjson backup_files "$backup_files" \
       --argjson backup_size "$backup_size" \
       --argjson total_cleanup "$total_cleanup_kb" \
       --argjson recommendations "$(printf '%s\n' "${recommendations[@]}" | jq -R . | jq -s 'map(select(. != ""))')" \
       '.file_analysis.optimization = {
           "cache_files": {"count": $cache_files, "size_kb": $cache_size},
           "log_files": {"count": $log_files, "size_kb": $log_size},
           "temp_files": {"count": $temp_files, "size_kb": $temp_size},
           "backup_files": {"count": $backup_files, "size_kb": $backup_size},
           "potential_savings_kb": $total_cleanup,
           "potential_savings_mb": ($total_cleanup / 1024)
       } | .recommendations = $recommendations' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Fleet storage management
manage_fleet_storage() {
    local action="$1"
    local target_pattern="$2"
    local threshold="$3"
    
    log_action "Fleet storage management: $action"
    
    case "$action" in
        "audit")
            audit_fleet_storage "$target_pattern"
            ;;
        "cleanup")
            cleanup_fleet_storage "$target_pattern" "$threshold"
            ;;
        "monitor")
            monitor_fleet_storage
            ;;
        "report")
            generate_fleet_storage_report
            ;;
    esac
}

# Generate fleet storage report
generate_fleet_storage_report() {
    local fleet_report="$REPORT_DIR/fleet_storage_report_$(date +%Y%m%d_%H%M%S).json"
    
    echo "Generating fleet storage report..."
    
    # System overview
    local total_disk_space=$(df -k / | awk 'NR==2 {print $2}')
    local used_disk_space=$(df -k / | awk 'NR==2 {print $3}')
    local available_disk_space=$(df -k / | awk 'NR==2 {print $4}')
    local disk_usage_percent=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
    
    cat > "$fleet_report" << EOF
{
    "fleet_storage_report": {
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "system_overview": {
            "total_disk_kb": $total_disk_space,
            "used_disk_kb": $used_disk_space,
            "available_disk_kb": $available_disk_space,
            "usage_percent": $disk_usage_percent
        },
        "category_analysis": {},
        "alerts": [],
        "recommendations": []
    }
}
EOF
    
    # Analyze each storage category
    for category in "${!STORAGE_CATEGORIES[@]}"; do
        echo "Analyzing category: $category"
        IFS=',' read -ra PATHS <<< "${STORAGE_CATEGORIES[$category]}"
        local category_size=0
        local category_files=0
        
        for path in "${PATHS[@]}"; do
            if [[ -d "$path" ]]; then
                local path_size=$(du -sk "$path" 2>/dev/null | cut -f1)
                local path_files=$(find "$path" -type f 2>/dev/null | wc -l)
                category_size=$((category_size + path_size))
                category_files=$((category_files + path_files))
            fi
        done
        
        # Update report with category data
        jq --arg category "$category" \
           --argjson size "$category_size" \
           --argjson files "$category_files" \
           '.fleet_storage_report.category_analysis[$category] = {
               "size_kb": $size,
               "file_count": $files,
               "size_mb": ($size / 1024),
               "size_gb": ($size / 1048576)
           }' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    done
    
    # Generate alerts based on thresholds
    local alerts=()
    if [[ $disk_usage_percent -gt 90 ]]; then
        alerts+=("CRITICAL: Disk usage above 90% ($disk_usage_percent%)")
    elif [[ $disk_usage_percent -gt 80 ]]; then
        alerts+=("WARNING: Disk usage above 80% ($disk_usage_percent%)")
    fi
    
    # Add alerts to report
    if [[ ${#alerts[@]} -gt 0 ]]; then
        jq --argjson alerts "$(printf '%s\n' "${alerts[@]}" | jq -R . | jq -s .)" \
           '.fleet_storage_report.alerts = $alerts' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    fi
    
    log_action "Fleet storage report generated: $fleet_report"
    echo "$fleet_report"
}

# Main execution function
main() {
    local action="${1:-analyze}"
    local target="${2:-/Users}"
    local level="${3:-standard}"
    local options="${4:-}"
    
    log_action "=== MacFleet Storage Management Started ==="
    log_action "Action: $action, Target: $target, Level: $level"
    
    case "$action" in
        "analyze")
            analyze_storage_comprehensive "$target" "$level"
            ;;
        "size")
            echo "File/Folder size:"
            du -h "$target"
            ;;
        "fleet")
            manage_fleet_storage "$level" "$target" "$options"
            ;;
        "report")
            generate_fleet_storage_report
            ;;
        "help")
            echo "Usage: $0 [action] [target] [level] [options]"
            echo "Actions:"
            echo "  analyze <path> [level] - Comprehensive storage analysis"
            echo "  size <path> - Simple size check"
            echo "  fleet <action> [pattern] [threshold] - Fleet management"
            echo "  report - Generate fleet storage report"
            echo "  help - Show this help"
            echo ""
            echo "Analysis levels: quick, standard, comprehensive, optimization, security"
            echo "Fleet actions: audit, cleanup, monitor, report"
            ;;
        *)
            log_action "ERROR: Unknown action: $action"
            exit 1
            ;;
    esac
    
    log_action "=== Storage management completed ==="
}

# Execute main function
main "$@"

Common Storage Analysis Tasks

Directory Comparison

#!/bin/bash

# Compare sizes of multiple directories
compare_directories() {
    local directories=("$@")
    
    echo "=== Directory Size Comparison ==="
    printf "%-40s %15s %15s\n" "Directory" "Size" "Files"
    echo "--------------------------------------------------------------------------------"
    
    for dir in "${directories[@]}"; do
        if [[ -d "$dir" ]]; then
            local size=$(du -sh "$dir" 2>/dev/null | cut -f1)
            local files=$(find "$dir" -type f 2>/dev/null | wc -l)
            printf "%-40s %15s %15d\n" "$dir" "$size" "$files"
        else
            printf "%-40s %15s %15s\n" "$dir" "N/A" "N/A"
        fi
    done
}

# Usage
compare_directories "/Users/stefan/Documents" "/Users/stefan/Downloads" "/Users/stefan/Desktop"

Storage Growth Tracking

#!/bin/bash

# Track storage growth over time
track_storage_growth() {
    local target_path="$1"
    local log_file="/var/log/storage_growth.csv"
    
    # Create header if file doesn't exist
    if [[ ! -f "$log_file" ]]; then
        echo "timestamp,path,size_kb,files,directories" > "$log_file"
    fi
    
    # Get current metrics
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    
    # Log current measurement
    echo "$timestamp,$target_path,$size_kb,$files,$dirs" >> "$log_file"
    
    echo "Storage growth logged for: $target_path"
    echo "Current size: $(echo "scale=2; $size_kb/1024" | bc) MB"
    echo "Files: $files, Directories: $dirs"
}

# Usage
track_storage_growth "/Users/stefan"

Cleanup Recommendations

#!/bin/bash

# Generate cleanup recommendations
generate_cleanup_recommendations() {
    local target_path="$1"
    
    echo "=== Cleanup Recommendations for: $target_path ==="
    
    # Old downloads
    local old_downloads=$(find "$target_path" -path "*/Downloads/*" -type f -mtime +30 2>/dev/null | wc -l)
    if [[ $old_downloads -gt 0 ]]; then
        echo "📁 Found $old_downloads files in Downloads older than 30 days"
    fi
    
    # Large cache files
    local cache_count=$(find "$target_path" -path "*/Cache*" -type f -size +10M 2>/dev/null | wc -l)
    if [[ $cache_count -gt 0 ]]; then
        echo "🗂️  Found $cache_count large cache files (>10MB)"
    fi
    
    # Duplicate files (duplicated md5 hashes indicate identical content)
    local duplicates=$(find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | awk '{print $1}' | sort | uniq -d | wc -l)
    if [[ $duplicates -gt 0 ]]; then
        echo "📄 Found approximately $duplicates sets of duplicate files"
    fi
    
    # Empty directories
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    if [[ $empty_dirs -gt 0 ]]; then
        echo "📂 Found $empty_dirs empty directories"
    fi
    
    # Log files
    local large_logs=$(find "$target_path" -name "*.log" -size +50M 2>/dev/null | wc -l)
    if [[ $large_logs -gt 0 ]]; then
        echo "📋 Found $large_logs large log files (>50MB)"
    fi
    
    echo -e "\n💡 Consider running cleanup operations for these items"
}

# Usage
generate_cleanup_recommendations "/Users"

Important Notes

  • File paths with spaces - Use quotes or escape characters: du -h "/path/with spaces"
  • Permission requirements - Some directories require admin access
  • Performance impact - Large directory scans can be resource-intensive
  • Regular monitoring - Set up automated storage monitoring for fleet management
  • Backup before cleanup - Always backup important data before cleanup operations
  • Test scripts on sample directories before fleet-wide deployment
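The "regular monitoring" note above can be implemented with launchd. A minimal sketch follows — the script path /usr/local/bin/macfleet_storage.sh, the label, and the 03:00 schedule are placeholders to adjust for your deployment:

```shell
#!/bin/bash

# Sketch: schedule a daily storage report via a launchd agent.
# PLIST_DIR and the script path below are assumptions -- adjust for your fleet.
PLIST_DIR="${PLIST_DIR:-$HOME/Library/LaunchAgents}"
PLIST="$PLIST_DIR/com.macfleet.storagereport.plist"
mkdir -p "$PLIST_DIR"

cat > "$PLIST" << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.macfleet.storagereport</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/macfleet_storage.sh</string>
        <string>report</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key><integer>3</integer>
        <key>Minute</key><integer>0</integer>
    </dict>
</dict>
</plist>
EOF

# Load it on macOS with:
# launchctl load "$PLIST"
echo "Created $PLIST"
```

Writing to ~/Library/LaunchAgents runs the job as the logged-in user; for fleet-wide, unattended runs you would typically install it under /Library/LaunchDaemons instead (requires root).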


Setting Up a GitHub Actions Runner on a Mac Mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners provide greater control and customization for your CI/CD setup. This tutorial walks you through configuring and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (register with Macfleet)
  • A GitHub repository with administrator rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the user's password
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner

Switch to the new user account:

su gh-runner

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are using Apple Silicon):

# Install Git if not already installed
brew install git

# Install Rosetta 2 for Apple Silicon Macs
softwareupdate --install-rosetta
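Rosetta 2 is only relevant on Apple Silicon, so it can help to check the CPU architecture first — on Intel Macs the `softwareupdate --install-rosetta` step is unnecessary:

```shell
#!/bin/bash

# Detect the CPU architecture before installing Rosetta 2
arch="$(uname -m)"
if [ "$arch" = "arm64" ]; then
    echo "Apple Silicon detected -- Rosetta 2 may be needed for x86_64 tools"
else
    echo "Architecture: $arch -- Rosetta 2 is not required"
fi
```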

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners

GitHub Actions Runner

  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the provided commands to download and configure the runner

GitHub Actions Runner

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  6. Run the run.sh script in your runner directory to complete the setup.
  7. Check in the terminal that the runner is active and listening for jobs, and verify the runner association and Idle status in the GitHub repository settings.

GitHub Actions Runner

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, configure the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Example workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated with your repository and labeled with self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Regularly monitor the runner logs for issues
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load balancing

Troubleshooting

Common issues and solutions:

  1. Runner not connecting:

    • Check network connectivity
    • Verify that the GitHub token is valid
    • Ensure proper permissions
  2. Build failures:

    • Check the Xcode installation
    • Verify the required dependencies
    • Review the workflow logs
  3. Permission issues:

    • Verify user permissions
    • Check the sudoers configuration
    • Review file system permissions
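The checks above can be scripted for a first pass. A hedged sketch — ~/actions-runner is GitHub's suggested default install directory and Runner.Listener is the listener process shipped with the runner package; adjust both if your install differs:

```shell
#!/bin/bash

# First-pass diagnostics for a self-hosted runner (RUNNER_DIR is an assumption)
RUNNER_DIR="${RUNNER_DIR:-$HOME/actions-runner}"

# 1. Network: can we reach the GitHub API?
if curl -fsS -m 10 -o /dev/null https://api.github.com 2>/dev/null; then
    net_status="ok"
else
    net_status="unreachable"
fi
echo "GitHub connectivity: $net_status"

# 2. Is the runner process alive?
if pgrep -f "Runner.Listener" >/dev/null 2>&1; then
    echo "Runner.Listener is running"
else
    echo "Runner.Listener not running -- start ./run.sh or check the service"
fi

# 3. Most recent diagnostic logs (the runner writes these into _diag)
ls -t "$RUNNER_DIR/_diag" 2>/dev/null | head -3

# 4. Xcode toolchain configured? (a common cause of build failures)
xcode-select -p 2>/dev/null || echo "Xcode command-line tools not configured"
```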

Conclusion

You now have a self-hosted GitHub Actions runner configured on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it up to date with the latest security patches and software versions.

Native App

Macfleet native application

Macfleet Installation Guide

Macfleet is a powerful fleet management solution designed specifically for cloud-hosted Mac Mini environments. As a Mac Mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to ensure comprehensive oversight of your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet app to the Applications folder
  • Eject the .dmg file
  • Open System Preferences > Security & Privacy
    • Privacy tab > Accessibility
    • Check Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow it through Windows Defender if prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start Menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb (Ubuntu/Debian) or .rpm (CentOS/RHEL) package here
  • Install using your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions if prompted
  • Add the user to the appropriate groups if needed
  • Launch Macfleet from the Applications menu
  • The application starts tracking automatically

Note: After installation on any system, log in with your Macfleet credentials to sync data with your dashboard.