Tutorial

New updates and improvements for Macfleet.

Important Notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may arise from using, modifying, or implementing these examples. Always review and test code in a safe environment before using it on production systems.

File Size and Storage Analysis on macOS

Master file size analysis and storage management on your MacFleet devices using powerful disk usage tools. This tutorial covers basic size queries, storage analysis, disk usage monitoring, and advanced storage optimization techniques for effective fleet storage management.

Understanding macOS Storage Analysis

macOS provides several command-line tools for storage analysis:

  • du - Disk usage analysis for files and directories
  • df - File system disk space usage
  • ls - File listing with size information
  • find - Search files by size criteria
  • stat - Detailed file information including size
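A quick way to see what each tool reports is to run all five against a scratch file. The snippet below is a minimal sketch; a GNU `stat -c` fallback is included so the same script also runs on Linux, where the BSD `stat -f` syntax is unavailable:

```shell
#!/bin/bash

# Create a scratch file so every command has a safe target
TMP_FILE="$(mktemp /tmp/sizedemo.XXXXXX)"
printf 'hello' > "$TMP_FILE"

du -h "$TMP_FILE"                                  # disk usage of the file
df -h /                                            # free space on the root volume
ls -lh "$TMP_FILE"                                 # listing with human-readable size
find /tmp -maxdepth 1 -type f -size -1M -name "$(basename "$TMP_FILE")"  # search by size
stat -f "%N is %z bytes" "$TMP_FILE" 2>/dev/null \
    || stat -c "%n is %s bytes" "$TMP_FILE"        # BSD stat on macOS, GNU fallback

rm -f "$TMP_FILE"
```

Note that `du` reports allocated disk blocks while `stat` reports the logical byte count, so the two can differ for small or sparse files.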

Basic File and Folder Size Operations

Find Single File Size

#!/bin/bash

# Get size of a specific file
FILE_PATH="/Users/stefan/Desktop/Document.txt"

echo "File size for: $FILE_PATH"
du -h "$FILE_PATH"

echo "File size analysis completed"

Find Folder Size

#!/bin/bash

# Get size of a folder
FOLDER_PATH="/Users/stefan/Desktop/Wallpapers"

echo "Folder size for: $FOLDER_PATH"
du -h "$FOLDER_PATH"

echo "Folder size analysis completed"

Detailed Size Information

#!/bin/bash

# Get detailed size information with options
analyze_size() {
    local target_path="$1"
    
    if [[ ! -e "$target_path" ]]; then
        echo "Error: Path not found: $target_path"
        return 1
    fi
    
    echo "=== Size Analysis for: $target_path ==="
    
    # Basic size
    echo "Total size:"
    du -h "$target_path"
    
    # Size with all files listed
    echo -e "\nDetailed breakdown:"
    du -ah "$target_path" | head -20
    
    # Grand total with summary
    echo -e "\nSummary with total:"
    du -ch "$target_path"
}

# Usage
analyze_size "/Users/stefan/Desktop/Document.txt"

Advanced Storage Analysis

Directory Size Breakdown

#!/bin/bash

# Analyze directory sizes with sorting
directory_analysis() {
    local base_path="$1"
    local max_depth="${2:-1}"
    
    echo "=== Directory Size Analysis: $base_path ==="
    
    # Top-level directory sizes, sorted by size
    echo "Largest directories:"
    du -h -d "$max_depth" "$base_path" 2>/dev/null | sort -hr | head -20
    
    # Count of items
    echo -e "\nItem counts:"
    echo "Files: $(find "$base_path" -type f 2>/dev/null | wc -l)"
    echo "Directories: $(find "$base_path" -type d 2>/dev/null | wc -l)"
    
    # File type analysis
    echo -e "\nFile type distribution:"
    find "$base_path" -type f 2>/dev/null | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr | head -10
}

# Usage
directory_analysis "/Users" 2

Large File Detection

#!/bin/bash

# Find large files in directory
find_large_files() {
    local search_path="$1"
    local min_size="${2:-100M}"
    
    echo "=== Large Files Analysis (>$min_size) in: $search_path ==="
    
    # Find large files
    echo "Large files found:"
    find "$search_path" -type f -size +"$min_size" -exec ls -lh {} \; 2>/dev/null | sort -k5 -hr
    
    # Count and total size
    local large_files_count=$(find "$search_path" -type f -size +"$min_size" 2>/dev/null | wc -l)
    echo -e "\nTotal large files: $large_files_count"
    
    if [[ $large_files_count -gt 0 ]]; then
        echo "Total size of large files:"
        find "$search_path" -type f -size +"$min_size" -exec du -k {} + 2>/dev/null | awk '{sum+=$1} END {printf "%.1f MB\n", sum/1024}'
    fi
}

# Usage
find_large_files "/Users" "50M"

Storage Usage by File Type

#!/bin/bash

# Analyze storage usage by file extension
storage_by_type() {
    local search_path="$1"
    
    echo "=== Storage Usage by File Type: $search_path ==="
    
    # Create temporary file for analysis
    local temp_file="/tmp/file_analysis_$$"
    
    # Get all files with extensions and their sizes
    find "$search_path" -type f -name "*.*" -exec sh -c '
        for file; do
            ext=$(echo "$file" | rev | cut -d. -f1 | rev | tr "[:upper:]" "[:lower:]")
            size=$(stat -f%z "$file" 2>/dev/null || echo 0)
            echo "$ext $size"
        done
    ' _ {} + > "$temp_file"
    
    # Summarize by extension
    echo "Top file types by total size:"
    awk '{
        ext_size[$1] += $2
        ext_count[$1]++
    } END {
        for (ext in ext_size) {
            printf "%-10s %15s %10d files\n", ext, ext_size[ext], ext_count[ext]
        }
    }' "$temp_file" | sort -k2 -nr | head -15 | \
    while read ext size count _; do
        # Convert bytes to human readable
        if [[ $size -gt 1073741824 ]]; then
            printf "%-10s %10.2f GB %10d files\n" "$ext" $(echo "scale=2; $size/1073741824" | bc) "$count"
        elif [[ $size -gt 1048576 ]]; then
            printf "%-10s %10.2f MB %10d files\n" "$ext" $(echo "scale=2; $size/1048576" | bc) "$count"
        else
            printf "%-10s %10.2f KB %10d files\n" "$ext" $(echo "scale=2; $size/1024" | bc) "$count"
        fi
    done
    
    # Cleanup
    rm -f "$temp_file"
}

# Usage
storage_by_type "/Users"

Disk Space Monitoring

System Disk Usage

#!/bin/bash

# Comprehensive disk usage report
system_disk_report() {
    echo "=== System Disk Usage Report ==="
    
    # File system usage
    echo "File System Usage:"
    df -h
    
    echo -e "\nDisk usage by mount point:"
    df -h | awk 'NR>1 {print $5 " " $1}' | while read output; do
        usage=$(echo $output | awk '{print $1}' | cut -d'%' -f1)
        partition=$(echo $output | awk '{print $2}')
        if [[ $usage -ge 90 ]]; then
            echo "🚨 $partition: ${usage}% (CRITICAL)"
        elif [[ $usage -ge 80 ]]; then
            echo "⚠️  $partition: ${usage}% (WARNING)"
        else
            echo "✅ $partition: ${usage}%"
        fi
    done
    
    # Top directories by size
    echo -e "\nTop directories by size:"
    du -h -d 1 / 2>/dev/null | sort -hr | head -10
}

# Run system report
system_disk_report

Storage Trend Analysis

#!/bin/bash

# Monitor storage trends over time
storage_trend_monitor() {
    local target_path="$1"
    local log_file="/var/log/macfleet_storage_trends.log"
    
    echo "=== Storage Trend Monitoring: $target_path ==="
    
    # Current timestamp and size
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local current_size=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    
    # Log current measurement
    echo "$timestamp,$target_path,$current_size" >> "$log_file"
    
    # Show recent trends if log exists
    if [[ -f "$log_file" ]]; then
        echo "Recent storage trends (last 10 measurements):"
        echo "Timestamp                | Path              | Size (KB)    | Change"
        echo "-------------------------|-------------------|--------------|--------"
        
        tail -10 "$log_file" | while IFS=',' read -r time path size; do
            # Calculate change from previous measurement
            printf "%-24s | %-17s | %12s | \n" "$time" "$(basename "$path")" "$size"
        done
        
        # Calculate growth rate
        local first_size=$(grep ",$target_path," "$log_file" | head -1 | cut -d',' -f3)
        local growth=$((current_size - first_size))
        local growth_percent=$(echo "scale=2; ($growth * 100) / $first_size" | bc 2>/dev/null || echo "0")
        
        echo -e "\nGrowth since first measurement: ${growth} KB (${growth_percent}%)"
    fi
}

# Usage
storage_trend_monitor "/Users"

Enterprise Storage Management System

#!/bin/bash

# MacFleet Storage Management Tool
# Comprehensive storage analysis and optimization for fleet devices

# Configuration
SCRIPT_VERSION="1.0.0"
LOG_FILE="/var/log/macfleet_storage.log"
REPORT_DIR="/etc/macfleet/reports/storage"
CONFIG_DIR="/etc/macfleet/storage"
ALERT_DIR="$CONFIG_DIR/alerts"

# Create directories if they don't exist
mkdir -p "$REPORT_DIR" "$CONFIG_DIR" "$ALERT_DIR"

# Storage categories for analysis
declare -A STORAGE_CATEGORIES=(
    ["system_critical"]="/System,/Library,/usr,/bin,/sbin"
    ["user_data"]="/Users,/home"
    ["applications"]="/Applications"
    ["caches"]="/Library/Caches,/Users/*/Library/Caches"
    ["logs"]="/var/log,/Library/Logs,/Users/*/Library/Logs"
    ["downloads"]="/Users/*/Downloads"
    ["documents"]="/Users/*/Documents,/Users/*/Desktop"
    ["media"]="/Users/*/Movies,/Users/*/Music,/Users/*/Pictures"
    ["temporary"]="/tmp,/var/tmp,/Users/*/Library/Application Support/*/Cache"
    ["development"]="/usr/local,/opt,/Users/*/Development"
)

# Storage thresholds and policies
declare -A STORAGE_POLICIES=(
    ["space_warning"]="80,85,90,95"
    ["large_file_threshold"]="100M,500M,1G,5G"
    ["old_file_threshold"]="30,90,180,365"
    ["cache_cleanup_age"]="7,14,30,60"
    ["temp_cleanup_age"]="1,3,7,14"
    ["log_retention_days"]="30,60,90,180"
)

# Logging function
log_action() {
    local message="$1"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] $message" | tee -a "$LOG_FILE"
}

# Advanced storage analysis engine
analyze_storage_comprehensive() {
    local target_path="$1"
    local analysis_level="${2:-standard}"
    local category="${3:-general}"
    
    log_action "Starting comprehensive storage analysis: $target_path (Level: $analysis_level)"
    
    if [[ ! -e "$target_path" ]]; then
        log_action "ERROR: Target path does not exist: $target_path"
        return 1
    fi
    
    local report_file="$REPORT_DIR/storage_analysis_$(echo "$target_path" | tr '/' '_')_$(date +%Y%m%d_%H%M%S).json"
    
    # Initialize report
    cat > "$report_file" << EOF
{
    "analysis_info": {
        "target_path": "$target_path",
        "analysis_level": "$analysis_level",
        "category": "$category",
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "script_version": "$SCRIPT_VERSION"
    },
    "storage_metrics": {},
    "file_analysis": {},
    "recommendations": [],
    "alerts": []
}
EOF
    
    # Basic storage metrics
    local total_size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local total_files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local total_dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    local largest_file_size=$(find "$target_path" -type f -exec stat -f%z {} \; 2>/dev/null | sort -nr | head -1)
    largest_file_size=${largest_file_size:-0}  # fall back to 0 when no files are found
    
    # Update report with basic metrics
    jq --argjson total_size_kb "$total_size_kb" \
       --argjson total_files "$total_files" \
       --argjson total_dirs "$((total_dirs - 1))" \
       --argjson largest_file "$largest_file_size" \
       '.storage_metrics = {
           "total_size_kb": $total_size_kb,
           "total_size_mb": ($total_size_kb / 1024),
           "total_size_gb": ($total_size_kb / 1048576),
           "total_files": $total_files,
           "total_directories": $total_dirs,
           "largest_file_bytes": $largest_file,
           "average_file_size": (if $total_files > 0 then ($total_size_kb * 1024 / $total_files) else 0 end)
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Perform analysis based on level
    case "$analysis_level" in
        "quick")
            perform_quick_storage_analysis "$target_path" "$report_file"
            ;;
        "standard")
            perform_standard_storage_analysis "$target_path" "$report_file"
            ;;
        "comprehensive")
            perform_comprehensive_storage_analysis "$target_path" "$report_file"
            ;;
        "optimization")
            perform_optimization_analysis "$target_path" "$report_file"
            ;;
        "security")
            perform_security_storage_analysis "$target_path" "$report_file"
            ;;
    esac
    
    log_action "Storage analysis completed: $report_file"
    echo "$report_file"
}

# Quick storage analysis
perform_quick_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing quick storage analysis..."
    
    # Top-level directory sizes
    local dir_sizes=$(du -h -d 1 "$target_path" 2>/dev/null | sort -hr | head -10)
    
    # Large files (>100MB)
    local large_files=$(find "$target_path" -type f -size +100M 2>/dev/null | head -10)
    
    # Update report
    jq --arg dir_sizes "$dir_sizes" \
       --arg large_files "$large_files" \
       '.file_analysis.quick = {
           "directory_sizes": $dir_sizes,
           "large_files_100mb": $large_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Standard storage analysis
perform_standard_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing standard storage analysis..."
    
    # File age analysis
    local recent_files=$(find "$target_path" -type f -mtime -7 2>/dev/null | wc -l)
    local old_files=$(find "$target_path" -type f -mtime +365 2>/dev/null | wc -l)
    
    # File size distribution
    local small_files=$(find "$target_path" -type f -size -1k 2>/dev/null | wc -l)
    local medium_files=$(find "$target_path" -type f -size +1k -size -1M 2>/dev/null | wc -l)
    local large_files=$(find "$target_path" -type f -size +1M -size -100M 2>/dev/null | wc -l)
    local huge_files=$(find "$target_path" -type f -size +100M 2>/dev/null | wc -l)
    
    # Empty files and directories
    local empty_files=$(find "$target_path" -type f -empty 2>/dev/null | wc -l)
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    
    # Update report
    jq --argjson recent_files "$recent_files" \
       --argjson old_files "$old_files" \
       --argjson small_files "$small_files" \
       --argjson medium_files "$medium_files" \
       --argjson large_files "$large_files" \
       --argjson huge_files "$huge_files" \
       --argjson empty_files "$empty_files" \
       --argjson empty_dirs "$empty_dirs" \
       '.file_analysis.standard = {
           "file_age": {
               "recent_files": $recent_files,
               "old_files": $old_files
           },
           "file_sizes": {
               "small_1kb": $small_files,
               "medium_1kb_1mb": $medium_files,
               "large_1mb_100mb": $large_files,
               "huge_100mb_plus": $huge_files
           },
           "empty_items": {
               "empty_files": $empty_files,
               "empty_directories": $empty_dirs
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Comprehensive storage analysis
perform_comprehensive_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing comprehensive storage analysis..."
    
    # Duplicate file detection
    local duplicates_temp="/tmp/duplicates_$$"
    # md5 -r prints the digest first, so uniq -w 32 compares checksums rather than paths
    find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | uniq -d -w 32 > "$duplicates_temp"
    local duplicate_count=$(wc -l < "$duplicates_temp")
    
    # File extension analysis
    local extensions_temp="/tmp/extensions_$$"
    find "$target_path" -type f -name "*.*" 2>/dev/null | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr > "$extensions_temp"
    
    # Access pattern analysis
    local never_accessed=$(find "$target_path" -type f -atime +365 2>/dev/null | wc -l)
    local recently_accessed=$(find "$target_path" -type f -atime -1 2>/dev/null | wc -l)
    
    # Directory depth analysis
    local max_depth=$(find "$target_path" -type d 2>/dev/null | awk -F/ '{print NF}' | sort -n | tail -1)
    local current_depth=$(echo "$target_path" | tr -cd '/' | wc -c)
    local relative_depth=$((max_depth - current_depth))
    
    # Update report
    jq --argjson duplicate_count "$duplicate_count" \
       --argjson never_accessed "$never_accessed" \
       --argjson recently_accessed "$recently_accessed" \
       --argjson max_depth "$relative_depth" \
       '.file_analysis.comprehensive = {
           "duplicates": {
               "duplicate_file_sets": $duplicate_count
           },
           "access_patterns": {
               "never_accessed": $never_accessed,
               "recently_accessed": $recently_accessed
           },
           "structure": {
               "maximum_depth": $max_depth
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Cleanup temp files
    rm -f "$duplicates_temp" "$extensions_temp"
}

# Optimization analysis
perform_optimization_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing optimization analysis..."
    
    # Cache files
    local cache_files=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) -type f 2>/dev/null | wc -l)
    local cache_size=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) -type f -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Log files
    local log_files=$(find "$target_path" -name "*.log" 2>/dev/null | wc -l)
    local log_size=$(find "$target_path" -name "*.log" -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Temporary files
    local temp_files=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) -type f 2>/dev/null | wc -l)
    local temp_size=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) -type f -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Backup files
    local backup_files=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) -type f 2>/dev/null | wc -l)
    local backup_size=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) -type f -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Potential savings calculation
    local total_cleanup_kb=$((cache_size + log_size + temp_size + backup_size))
    local total_cleanup_mb=$((total_cleanup_kb / 1024))
    
    # Generate recommendations
    local recommendations=()
    [[ $cache_size -gt 102400 ]] && recommendations+=("Consider cleaning cache files (${cache_size} KB)")
    [[ $log_size -gt 51200 ]] && recommendations+=("Archive or clean old log files (${log_size} KB)")
    [[ $temp_size -gt 10240 ]] && recommendations+=("Remove temporary files (${temp_size} KB)")
    [[ $backup_size -gt 102400 ]] && recommendations+=("Review backup file retention (${backup_size} KB)")
    
    # Update report
    jq --argjson cache_files "$cache_files" \
       --argjson cache_size "$cache_size" \
       --argjson log_files "$log_files" \
       --argjson log_size "$log_size" \
       --argjson temp_files "$temp_files" \
       --argjson temp_size "$temp_size" \
       --argjson backup_files "$backup_files" \
       --argjson backup_size "$backup_size" \
       --argjson total_cleanup "$total_cleanup_kb" \
       --argjson recommendations "$(printf '%s\n' "${recommendations[@]}" | jq -R . | jq -s .)" \
       '.file_analysis.optimization = {
           "cache_files": {"count": $cache_files, "size_kb": $cache_size},
           "log_files": {"count": $log_files, "size_kb": $log_size},
           "temp_files": {"count": $temp_files, "size_kb": $temp_size},
           "backup_files": {"count": $backup_files, "size_kb": $backup_size},
           "potential_savings_kb": $total_cleanup,
           "potential_savings_mb": ($total_cleanup / 1024)
       } | .recommendations = $recommendations' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Fleet storage management
manage_fleet_storage() {
    local action="$1"
    local target_pattern="$2"
    local threshold="$3"
    
    log_action "Fleet storage management: $action"
    
    case "$action" in
        "audit")
            audit_fleet_storage "$target_pattern"
            ;;
        "cleanup")
            cleanup_fleet_storage "$target_pattern" "$threshold"
            ;;
        "monitor")
            monitor_fleet_storage
            ;;
        "report")
            generate_fleet_storage_report
            ;;
    esac
}

# Generate fleet storage report
generate_fleet_storage_report() {
    local fleet_report="$REPORT_DIR/fleet_storage_report_$(date +%Y%m%d_%H%M%S).json"
    
    echo "Generating fleet storage report..."
    
    # System overview
    local total_disk_space=$(df -k / | awk 'NR==2 {print $2}')
    local used_disk_space=$(df -k / | awk 'NR==2 {print $3}')
    local available_disk_space=$(df -k / | awk 'NR==2 {print $4}')
    local disk_usage_percent=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
    
    cat > "$fleet_report" << EOF
{
    "fleet_storage_report": {
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "system_overview": {
            "total_disk_kb": $total_disk_space,
            "used_disk_kb": $used_disk_space,
            "available_disk_kb": $available_disk_space,
            "usage_percent": $disk_usage_percent
        },
        "category_analysis": {},
        "alerts": [],
        "recommendations": []
    }
}
EOF
    
    # Analyze each storage category
    for category in "${!STORAGE_CATEGORIES[@]}"; do
        echo "Analyzing category: $category"
        IFS=',' read -ra PATHS <<< "${STORAGE_CATEGORIES[$category]}"
        local category_size=0
        local category_files=0
        
        for path in "${PATHS[@]}"; do
            if [[ -d "$path" ]]; then
                local path_size=$(du -sk "$path" 2>/dev/null | cut -f1)
                local path_files=$(find "$path" -type f 2>/dev/null | wc -l)
                category_size=$((category_size + path_size))
                category_files=$((category_files + path_files))
            fi
        done
        
        # Update report with category data
        jq --arg category "$category" \
           --argjson size "$category_size" \
           --argjson files "$category_files" \
           '.fleet_storage_report.category_analysis[$category] = {
               "size_kb": $size,
               "file_count": $files,
               "size_mb": ($size / 1024),
               "size_gb": ($size / 1048576)
           }' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    done
    
    # Generate alerts based on thresholds
    local alerts=()
    if [[ $disk_usage_percent -gt 90 ]]; then
        alerts+=("CRITICAL: Disk usage above 90% ($disk_usage_percent%)")
    elif [[ $disk_usage_percent -gt 80 ]]; then
        alerts+=("WARNING: Disk usage above 80% ($disk_usage_percent%)")
    fi
    
    # Add alerts to report
    if [[ ${#alerts[@]} -gt 0 ]]; then
        jq --argjson alerts "$(printf '%s\n' "${alerts[@]}" | jq -R . | jq -s .)" \
           '.fleet_storage_report.alerts = $alerts' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    fi
    
    log_action "Fleet storage report generated: $fleet_report"
    echo "$fleet_report"
}

# Main execution function
main() {
    local action="${1:-analyze}"
    local target="${2:-/Users}"
    local level="${3:-standard}"
    local options="${4:-}"
    
    log_action "=== MacFleet Storage Management Started ==="
    log_action "Action: $action, Target: $target, Level: $level"
    
    case "$action" in
        "analyze")
            analyze_storage_comprehensive "$target" "$level"
            ;;
        "size")
            echo "File/Folder size:"
            du -h "$target"
            ;;
        "fleet")
            manage_fleet_storage "$level" "$target" "$options"
            ;;
        "report")
            generate_fleet_storage_report
            ;;
        "help")
            echo "Usage: $0 [action] [target] [level] [options]"
            echo "Actions:"
            echo "  analyze <path> [level] - Comprehensive storage analysis"
            echo "  size <path> - Simple size check"
            echo "  fleet <action> [pattern] [threshold] - Fleet management"
            echo "  report - Generate fleet storage report"
            echo "  help - Show this help"
            echo ""
            echo "Analysis levels: quick, standard, comprehensive, optimization, security"
            echo "Fleet actions: audit, cleanup, monitor, report"
            ;;
        *)
            log_action "ERROR: Unknown action: $action"
            exit 1
            ;;
    esac
    
    log_action "=== Storage management completed ==="
}

# Execute main function
main "$@"

Common Storage Analysis Tasks

Directory Comparison

#!/bin/bash

# Compare sizes of multiple directories
compare_directories() {
    local directories=("$@")
    
    echo "=== Directory Size Comparison ==="
    printf "%-40s %15s %15s\n" "Directory" "Size" "Files"
    echo "--------------------------------------------------------------------------------"
    
    for dir in "${directories[@]}"; do
        if [[ -d "$dir" ]]; then
            local size=$(du -sh "$dir" 2>/dev/null | cut -f1)
            local files=$(find "$dir" -type f 2>/dev/null | wc -l)
            printf "%-40s %15s %15d\n" "$dir" "$size" "$files"
        else
            printf "%-40s %15s %15s\n" "$dir" "N/A" "N/A"
        fi
    done
}

# Usage
compare_directories "/Users/stefan/Documents" "/Users/stefan/Downloads" "/Users/stefan/Desktop"

Storage Growth Tracking

#!/bin/bash

# Track storage growth over time
track_storage_growth() {
    local target_path="$1"
    local log_file="/var/log/storage_growth.csv"
    
    # Create header if file doesn't exist
    if [[ ! -f "$log_file" ]]; then
        echo "timestamp,path,size_kb,files,directories" > "$log_file"
    fi
    
    # Get current metrics
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    
    # Log current measurement
    echo "$timestamp,$target_path,$size_kb,$files,$dirs" >> "$log_file"
    
    echo "Storage growth logged for: $target_path"
    echo "Current size: $(echo "scale=2; $size_kb/1024" | bc) MB"
    echo "Files: $files, Directories: $dirs"
}

# Usage
track_storage_growth "/Users/stefan"

Cleanup Recommendations

#!/bin/bash

# Generate cleanup recommendations
generate_cleanup_recommendations() {
    local target_path="$1"
    
    echo "=== Cleanup Recommendations for: $target_path ==="
    
    # Old downloads
    local old_downloads=$(find "$target_path" -path "*/Downloads/*" -type f -mtime +30 2>/dev/null | wc -l)
    if [[ $old_downloads -gt 0 ]]; then
        echo "📁 Found $old_downloads files in Downloads older than 30 days"
    fi
    
    # Large cache files
    local cache_count=$(find "$target_path" -path "*/Cache*" -type f -size +10M 2>/dev/null | wc -l)
    if [[ $cache_count -gt 0 ]]; then
        echo "🗂️  Found $cache_count large cache files (>10MB)"
    fi
    
    # Duplicate files
    local duplicates=$(find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | uniq -d -w 32 | wc -l)
    if [[ $duplicates -gt 0 ]]; then
        echo "📄 Found approximately $duplicates duplicate files"
    fi
    
    # Empty directories
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    if [[ $empty_dirs -gt 0 ]]; then
        echo "📂 Found $empty_dirs empty directories"
    fi
    
    # Log files
    local large_logs=$(find "$target_path" -name "*.log" -size +50M 2>/dev/null | wc -l)
    if [[ $large_logs -gt 0 ]]; then
        echo "📋 Found $large_logs large log files (>50MB)"
    fi
    
    echo -e "\n💡 Consider running cleanup operations for these items"
}

# Usage
generate_cleanup_recommendations "/Users"

Important Notes

  • File paths with spaces - Use quotes or escape characters: du -h "/path/with spaces"
  • Permission requirements - Some directories require admin access
  • Performance impact - Large directory scans can be resource-intensive
  • Regular monitoring - Set up automated storage monitoring for fleet management
  • Backup before cleanup - Always backup important data before cleanup operations
  • Test scripts on sample directories before fleet-wide deployment
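To automate the regular monitoring suggested above, a LaunchAgent can run any of these scripts on a schedule. The label, script path, and interval below are illustrative placeholders, not a finished deployment:

```shell
#!/bin/bash

# Write a LaunchAgent that runs a (hypothetical) storage script every hour
PLIST="$HOME/Library/LaunchAgents/com.macfleet.storagemonitor.plist"
mkdir -p "$(dirname "$PLIST")"

cat > "$PLIST" << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.macfleet.storagemonitor</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/macfleet_storage.sh</string>
        <string>report</string>
    </array>
    <key>StartInterval</key>
    <integer>3600</integer>
</dict>
</plist>
EOF

echo "Created $PLIST"
echo "Activate with: launchctl load $PLIST"
```

`StartInterval` runs the job every N seconds; for calendar-based schedules (e.g. nightly at 02:00), use a `StartCalendarInterval` dictionary instead.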

Setting Up a GitHub Actions Runner on a Mac Mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners provide greater control and customization for your CI/CD setup. This tutorial walks you through configuring and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (sign up at Macfleet)
  • A GitHub repository with admin rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the password for the user
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner

Switch to the new user account:

su gh-runner

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are on Apple Silicon):

# Install Git if not already installed
brew install git

# Install Rosetta 2 for Apple Silicon Macs
softwareupdate --install-rosetta

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners

GitHub Actions Runner

  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the provided commands to download and configure the runner

GitHub Actions Runner

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  6. Run the run.sh script in your runner directory to complete the setup.
  7. Verify in the terminal that the runner is active and listening for jobs, and check your GitHub repository settings to confirm the runner is associated and shows the Idle status.
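For reference, the download-and-configure flow generated by the GitHub UI looks roughly like the sketch below. The package URL and registration token are placeholders — always copy the exact, versioned commands and the short-lived token from your repository's runner page rather than running this as-is:

```shell
# Placeholder sketch of the runner setup — do not run verbatim
mkdir -p ~/actions-runner && cd ~/actions-runner

# Download and unpack the runner package (URL comes from the GitHub UI)
curl -o actions-runner.tar.gz -L "<runner-package-url-from-github>"
tar xzf actions-runner.tar.gz

# Register against your repository with the token shown in the UI
./config.sh --url "https://github.com/<username>/<repository>" --token "<registration-token>"

# Start listening for jobs
./run.sh
```

Registration tokens expire after a short time, so generate a fresh one from the runner page if configuration fails with an authentication error.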

GitHub Actions Runner

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, configure the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Sample workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated to your repository and labeled with self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Monitor runner logs regularly for issues
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load balancing

Troubleshooting

Common issues and solutions:

  1. Runner not connecting:

    • Check network connectivity
    • Verify the GitHub token is valid
    • Make sure permissions are correct
  2. Build failures:

    • Check the Xcode installation
    • Verify required dependencies
    • Review workflow logs
  3. Permission issues:

    • Check user permissions
    • Verify the sudoers configuration
    • Review file system permissions

Conclusion

You now have a self-hosted GitHub Actions runner configured on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it up to date with the latest security patches and software versions.

Native App

Macfleet native app

Macfleet Installation Guide

Macfleet is a powerful fleet management solution designed specifically for cloud-hosted Mac Mini environments. As a Mac Mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to ensure comprehensive oversight of your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet application to the Applications folder
  • Eject the .dmg file
  • Open System Preferences > Security & Privacy
    • Privacy tab > Accessibility
    • Check Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically
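For unattended fleet deployments, the GUI steps above can be approximated from the command line with hdiutil. The .dmg name, volume name, and app bundle name below are assumptions — adjust them to match the actual installer you downloaded:

```shell
#!/bin/bash

# Hypothetical scripted install of a .dmg (names are illustrative)
DMG="Macfleet.dmg"
MOUNT_POINT="/Volumes/Macfleet"

hdiutil attach "$DMG" -mountpoint "$MOUNT_POINT" -nobrowse   # mount the disk image
cp -R "$MOUNT_POINT/Macfleet.app" /Applications/             # copy the app bundle
hdiutil detach "$MOUNT_POINT"                                # unmount the image
```

Note that the Accessibility permission under Security & Privacy cannot be granted from a plain script; on managed fleets it is typically pushed via an MDM PPPC (Privacy Preferences Policy Control) profile.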

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow it through Windows Defender if prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start Menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb package (Ubuntu/Debian) or .rpm (CentOS/RHEL) here
  • Install using your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions if prompted
  • Add the user to the appropriate groups if needed
  • Launch Macfleet from the Applications menu
  • The application starts tracking automatically

Note: After installation on all systems, sign in with your Macfleet credentials to sync data with your dashboard.