Tutorial

New updates and improvements to Macfleet.

Important Notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may arise from using, modifying, or implementing these examples. Always review and test code in a safe environment before using it on production systems.

File Size and Storage Analysis on macOS

Master file size analysis and storage management on your MacFleet devices using powerful disk usage tools. This tutorial covers basic size queries, storage analysis, disk usage monitoring, and advanced storage optimization techniques for effective fleet storage management.

Understanding macOS Storage Analysis

macOS provides several command-line tools for storage analysis:

  • du - Disk usage analysis for files and directories
  • df - File system disk space usage
  • ls - File listing with size information
  • find - Search files by size criteria
  • stat - Detailed file information including size
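To see how these tools differ in practice, the short sketch below exercises each one against the same throwaway sandbox directory; the GNU/Linux `stat -c%s` fallback is included in case the macOS syntax is unavailable:

```shell
#!/bin/bash
# Compare the five tools against the same small sandbox directory.
sandbox=$(mktemp -d)
printf 'hello world\n' > "$sandbox/sample.txt"

# du: disk usage (allocated blocks, rounded up to the block size)
du -h "$sandbox/sample.txt"

# df: free space on the file system that holds the sandbox
df -h "$sandbox"

# ls: logical file size in the size column
ls -lh "$sandbox/sample.txt"

# find: select files by size (nothing here exceeds 1 MB, so no output)
find "$sandbox" -type f -size +1M

# stat: exact size in bytes (macOS syntax first, GNU/Linux fallback)
stat -f%z "$sandbox/sample.txt" 2>/dev/null || stat -c%s "$sandbox/sample.txt"

rm -rf "$sandbox"
```

Note that `du` reports allocated disk blocks while `ls` and `stat` report the logical size, so small files often appear larger under `du`.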

Basic File and Folder Size Operations

Find Single File Size

#!/bin/bash

# Get size of a specific file
FILE_PATH="/Users/stefan/Desktop/Document.txt"

echo "File size for: $FILE_PATH"
du -h "$FILE_PATH"

echo "File size analysis completed"

Find Folder Size

#!/bin/bash

# Get size of a folder
FOLDER_PATH="/Users/stefan/Desktop/Wallpapers"

echo "Folder size for: $FOLDER_PATH"
du -h "$FOLDER_PATH"

echo "Folder size analysis completed"

Detailed Size Information

#!/bin/bash

# Get detailed size information with options
analyze_size() {
    local target_path="$1"
    
    if [[ ! -e "$target_path" ]]; then
        echo "Error: Path not found: $target_path"
        return 1
    fi
    
    echo "=== Size Analysis for: $target_path ==="
    
    # Basic size
    echo "Total size:"
    du -h "$target_path"
    
    # Size with all files listed
    echo -e "\nDetailed breakdown:"
    du -ah "$target_path" | head -20
    
    # Grand total with summary
    echo -e "\nSummary with total:"
    du -ch "$target_path"
}

# Usage
analyze_size "/Users/stefan/Desktop/Document.txt"

Advanced Storage Analysis

Directory Size Breakdown

#!/bin/bash

# Analyze directory sizes with sorting
directory_analysis() {
    local base_path="$1"
    local max_depth="${2:-1}"
    
    echo "=== Directory Size Analysis: $base_path ==="
    
    # Top-level directory sizes, sorted by size
    echo "Largest directories:"
    du -h -d "$max_depth" "$base_path" 2>/dev/null | sort -hr | head -20
    
    # Count of items
    echo -e "\nItem counts:"
    echo "Files: $(find "$base_path" -type f 2>/dev/null | wc -l)"
    echo "Directories: $(find "$base_path" -type d 2>/dev/null | wc -l)"
    
    # File type analysis
    echo -e "\nFile type distribution:"
    find "$base_path" -type f 2>/dev/null | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr | head -10
}

# Usage
directory_analysis "/Users" 2

Large File Detection

#!/bin/bash

# Find large files in directory
find_large_files() {
    local search_path="$1"
    local min_size="${2:-100M}"
    
    echo "=== Large Files Analysis (>$min_size) in: $search_path ==="
    
    # Find large files
    echo "Large files found:"
    find "$search_path" -type f -size +"$min_size" -exec ls -lh {} \; 2>/dev/null | sort -k5 -hr
    
    # Count and total size
    local large_files_count=$(find "$search_path" -type f -size +"$min_size" 2>/dev/null | wc -l)
    echo -e "\nTotal large files: $large_files_count"
    
    if [[ $large_files_count -gt 0 ]]; then
        echo "Total size of large files:"
        find "$search_path" -type f -size +"$min_size" -exec du -ch {} + 2>/dev/null | tail -1
    fi
}

# Usage
find_large_files "/Users" "50M"

Storage Usage by File Type

#!/bin/bash

# Analyze storage usage by file extension
storage_by_type() {
    local search_path="$1"
    
    echo "=== Storage Usage by File Type: $search_path ==="
    
    # Create temporary file for analysis
    local temp_file="/tmp/file_analysis_$$"
    
    # Get all files with extensions and their sizes
    find "$search_path" -type f -name "*.*" -exec sh -c '
        for file; do
            ext=$(echo "$file" | rev | cut -d. -f1 | rev | tr "[:upper:]" "[:lower:]")
            size=$(stat -f%z "$file" 2>/dev/null || echo 0)
            echo "$ext $size"
        done
    ' _ {} + > "$temp_file"
    
    # Summarize by extension
    echo "Top file types by total size:"
    awk '{
        ext_size[$1] += $2
        ext_count[$1]++
    } END {
        for (ext in ext_size) {
            printf "%-10s %15s %10d files\n", ext, ext_size[ext], ext_count[ext]
        }
    }' "$temp_file" | sort -k2 -nr | head -15 | \
    while read ext size count _; do
        # Convert bytes to human readable
        if [[ $size -gt 1073741824 ]]; then
            printf "%-10s %10.2f GB %10d files\n" "$ext" $(echo "scale=2; $size/1073741824" | bc) "$count"
        elif [[ $size -gt 1048576 ]]; then
            printf "%-10s %10.2f MB %10d files\n" "$ext" $(echo "scale=2; $size/1048576" | bc) "$count"
        else
            printf "%-10s %10.2f KB %10d files\n" "$ext" $(echo "scale=2; $size/1024" | bc) "$count"
        fi
    done
    
    # Cleanup
    rm -f "$temp_file"
}

# Usage
storage_by_type "/Users"
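The bc-based loop above works, but the same human-readable formatting can be done with awk alone after sorting by raw byte totals. A compact alternative sketch (the function name is illustrative):

```shell
#!/bin/bash
# Aggregate "<ext> <bytes>" pairs, sort by raw byte totals,
# then format sizes in a single awk pass (no bc required).
summarize_extensions() {
    awk '{ size[$1] += $2; count[$1]++ } END { for (e in size) print size[e], e, count[e] }' \
    | sort -nr \
    | awk '{
        s = $1
        if      (s >= 1073741824) printf "%-10s %10.2f GB %10d files\n", $2, s / 1073741824, $3
        else if (s >= 1048576)    printf "%-10s %10.2f MB %10d files\n", $2, s / 1048576, $3
        else                      printf "%-10s %10.2f KB %10d files\n", $2, s / 1024, $3
    }'
}

# Example with synthetic input: two jpg files totalling 3 MB, one small txt
printf 'jpg 2097152\njpg 1048576\ntxt 2048\n' | summarize_extensions
```

Sorting on the raw byte totals before formatting keeps the ordering correct even when MB and KB entries are mixed.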

Disk Space Monitoring

System Disk Usage

#!/bin/bash

# Comprehensive disk usage report
system_disk_report() {
    echo "=== System Disk Usage Report ==="
    
    # File system usage
    echo "File System Usage:"
    df -h
    
    echo -e "\nDisk usage by mount point:"
    df -h | awk 'NR>1 {print $5 " " $1}' | while read output; do
        usage=$(echo $output | awk '{print $1}' | cut -d'%' -f1)
        partition=$(echo $output | awk '{print $2}')
        if [[ $usage -ge 90 ]]; then
            echo "🚨 $partition: ${usage}% (CRITICAL)"
        elif [[ $usage -ge 80 ]]; then
            echo "⚠️  $partition: ${usage}% (WARNING)"
        else
            echo "✅ $partition: ${usage}%"
        fi
    done
    
    # Top directories by size (one level deep to keep the scan manageable)
    echo -e "\nTop directories by size:"
    du -h -d 1 / 2>/dev/null | sort -hr | head -10
}

# Run system report
system_disk_report

Storage Trend Analysis

#!/bin/bash

# Monitor storage trends over time
storage_trend_monitor() {
    local target_path="$1"
    local log_file="/var/log/macfleet_storage_trends.log"
    
    echo "=== Storage Trend Monitoring: $target_path ==="
    
    # Current timestamp and size
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local current_size=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    
    # Log current measurement
    echo "$timestamp,$target_path,$current_size" >> "$log_file"
    
    # Show recent trends if log exists
    if [[ -f "$log_file" ]]; then
        echo "Recent storage trends (last 10 measurements):"
        echo "Timestamp                | Path              | Size (KB)"
        echo "-------------------------|-------------------|-------------"
        
        tail -10 "$log_file" | while IFS=',' read -r time path size; do
            printf "%-24s | %-17s | %12s\n" "$time" "$(basename "$path")" "$size"
        done
        
        # Calculate growth rate from the first measurement logged for this path
        local first_size=$(grep -F ",$target_path," "$log_file" | head -1 | cut -d',' -f3)
        first_size=${first_size:-$current_size}
        local growth=$((current_size - first_size))
        local growth_percent=$(echo "scale=2; ($growth * 100) / $first_size" | bc 2>/dev/null || echo "0")
        
        echo -e "\nGrowth since first measurement: ${growth} KB (${growth_percent}%)"
    fi
}

# Usage
storage_trend_monitor "/Users"

Enterprise Storage Management System

#!/bin/bash

# MacFleet Storage Management Tool
# Comprehensive storage analysis and optimization for fleet devices

# Configuration
SCRIPT_VERSION="1.0.0"
LOG_FILE="/var/log/macfleet_storage.log"
REPORT_DIR="/etc/macfleet/reports/storage"
CONFIG_DIR="/etc/macfleet/storage"
ALERT_DIR="$CONFIG_DIR/alerts"

# Create directories if they don't exist
mkdir -p "$REPORT_DIR" "$CONFIG_DIR" "$ALERT_DIR"

# Storage categories for analysis
declare -A STORAGE_CATEGORIES=(
    ["system_critical"]="/System,/Library,/usr,/bin,/sbin"
    ["user_data"]="/Users,/home"
    ["applications"]="/Applications"
    ["caches"]="/Library/Caches,/Users/*/Library/Caches"
    ["logs"]="/var/log,/Library/Logs,/Users/*/Library/Logs"
    ["downloads"]="/Users/*/Downloads"
    ["documents"]="/Users/*/Documents,/Users/*/Desktop"
    ["media"]="/Users/*/Movies,/Users/*/Music,/Users/*/Pictures"
    ["temporary"]="/tmp,/var/tmp,/Users/*/Library/Application Support/*/Cache"
    ["development"]="/usr/local,/opt,/Users/*/Development"
)

# Storage thresholds and policies
declare -A STORAGE_POLICIES=(
    ["space_warning"]="80,85,90,95"
    ["large_file_threshold"]="100M,500M,1G,5G"
    ["old_file_threshold"]="30,90,180,365"
    ["cache_cleanup_age"]="7,14,30,60"
    ["temp_cleanup_age"]="1,3,7,14"
    ["log_retention_days"]="30,60,90,180"
)

# Logging function
log_action() {
    local message="$1"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] $message" | tee -a "$LOG_FILE"
}

# Advanced storage analysis engine
analyze_storage_comprehensive() {
    local target_path="$1"
    local analysis_level="${2:-standard}"
    local category="${3:-general}"
    
    log_action "Starting comprehensive storage analysis: $target_path (Level: $analysis_level)"
    
    if [[ ! -e "$target_path" ]]; then
        log_action "ERROR: Target path does not exist: $target_path"
        return 1
    fi
    
    local report_file="$REPORT_DIR/storage_analysis_$(echo "$target_path" | tr '/' '_')_$(date +%Y%m%d_%H%M%S).json"
    
    # Initialize report
    cat > "$report_file" << EOF
{
    "analysis_info": {
        "target_path": "$target_path",
        "analysis_level": "$analysis_level",
        "category": "$category",
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "script_version": "$SCRIPT_VERSION"
    },
    "storage_metrics": {},
    "file_analysis": {},
    "recommendations": [],
    "alerts": []
}
EOF
    
    # Basic storage metrics
    local total_size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local total_files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local total_dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    local largest_file_size=$(find "$target_path" -type f -exec stat -f%z {} \; 2>/dev/null | sort -nr | head -1)
    largest_file_size=${largest_file_size:-0}
    
    # Update report with basic metrics
    jq --argjson total_size_kb "$total_size_kb" \
       --argjson total_files "$total_files" \
       --argjson total_dirs "$((total_dirs - 1))" \
       --argjson largest_file "$largest_file_size" \
       '.storage_metrics = {
           "total_size_kb": $total_size_kb,
           "total_size_mb": ($total_size_kb / 1024),
           "total_size_gb": ($total_size_kb / 1048576),
           "total_files": $total_files,
           "total_directories": $total_dirs,
           "largest_file_bytes": $largest_file,
           "average_file_size": (if $total_files > 0 then ($total_size_kb * 1024 / $total_files) else 0 end)
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Perform analysis based on level
    case "$analysis_level" in
        "quick")
            perform_quick_storage_analysis "$target_path" "$report_file"
            ;;
        "standard")
            perform_standard_storage_analysis "$target_path" "$report_file"
            ;;
        "comprehensive")
            perform_comprehensive_storage_analysis "$target_path" "$report_file"
            ;;
        "optimization")
            perform_optimization_analysis "$target_path" "$report_file"
            ;;
        "security")
            perform_security_storage_analysis "$target_path" "$report_file"
            ;;
    esac
    
    log_action "Storage analysis completed: $report_file"
    echo "$report_file"
}

# Quick storage analysis
perform_quick_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing quick storage analysis..."
    
    # Top-level directory sizes
    local dir_sizes=$(du -h -d 1 "$target_path" 2>/dev/null | sort -hr | head -10)
    
    # Large files (>100MB)
    local large_files=$(find "$target_path" -type f -size +100M 2>/dev/null | head -10)
    
    # Update report
    jq --arg dir_sizes "$dir_sizes" \
       --arg large_files "$large_files" \
       '.file_analysis.quick = {
           "directory_sizes": $dir_sizes,
           "large_files_100mb": $large_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Standard storage analysis
perform_standard_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing standard storage analysis..."
    
    # File age analysis
    local recent_files=$(find "$target_path" -type f -mtime -7 2>/dev/null | wc -l)
    local old_files=$(find "$target_path" -type f -mtime +365 2>/dev/null | wc -l)
    
    # File size distribution
    local small_files=$(find "$target_path" -type f -size -1k 2>/dev/null | wc -l)
    local medium_files=$(find "$target_path" -type f -size +1k -size -1M 2>/dev/null | wc -l)
    local large_files=$(find "$target_path" -type f -size +1M -size -100M 2>/dev/null | wc -l)
    local huge_files=$(find "$target_path" -type f -size +100M 2>/dev/null | wc -l)
    
    # Empty files and directories
    local empty_files=$(find "$target_path" -type f -empty 2>/dev/null | wc -l)
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    
    # Update report
    jq --argjson recent_files "$recent_files" \
       --argjson old_files "$old_files" \
       --argjson small_files "$small_files" \
       --argjson medium_files "$medium_files" \
       --argjson large_files "$large_files" \
       --argjson huge_files "$huge_files" \
       --argjson empty_files "$empty_files" \
       --argjson empty_dirs "$empty_dirs" \
       '.file_analysis.standard = {
           "file_age": {
               "recent_files": $recent_files,
               "old_files": $old_files
           },
           "file_sizes": {
               "small_1kb": $small_files,
               "medium_1kb_1mb": $medium_files,
               "large_1mb_100mb": $large_files,
               "huge_100mb_plus": $huge_files
           },
           "empty_items": {
               "empty_files": $empty_files,
               "empty_directories": $empty_dirs
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Comprehensive storage analysis
perform_comprehensive_storage_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing comprehensive storage analysis..."
    
    # Duplicate file detection (md5 -r prints "hash filename", so uniq can compare the leading hashes)
    local duplicates_temp="/tmp/duplicates_$$"
    find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | uniq -d -w 32 > "$duplicates_temp"
    local duplicate_count=$(wc -l < "$duplicates_temp")
    
    # File extension analysis
    local extensions_temp="/tmp/extensions_$$"
    find "$target_path" -type f -name "*.*" 2>/dev/null | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr > "$extensions_temp"
    
    # Access pattern analysis
    local never_accessed=$(find "$target_path" -type f -atime +365 2>/dev/null | wc -l)
    local recently_accessed=$(find "$target_path" -type f -atime -1 2>/dev/null | wc -l)
    
    # Directory depth analysis
    local max_depth=$(find "$target_path" -type d 2>/dev/null | awk -F/ '{print NF}' | sort -n | tail -1)
    local current_depth=$(echo "$target_path" | tr -cd '/' | wc -c)
    local relative_depth=$((max_depth - current_depth))
    
    # Update report
    jq --argjson duplicate_count "$duplicate_count" \
       --argjson never_accessed "$never_accessed" \
       --argjson recently_accessed "$recently_accessed" \
       --argjson max_depth "$relative_depth" \
       '.file_analysis.comprehensive = {
           "duplicates": {
               "duplicate_file_sets": $duplicate_count
           },
           "access_patterns": {
               "never_accessed": $never_accessed,
               "recently_accessed": $recently_accessed
           },
           "structure": {
               "maximum_depth": $max_depth
           }
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Cleanup temp files
    rm -f "$duplicates_temp" "$extensions_temp"
}

# Optimization analysis
perform_optimization_analysis() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing optimization analysis..."
    
    # Cache files
    local cache_files=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) 2>/dev/null | wc -l)
    local cache_size=$(find "$target_path" \( -path "*/Cache*" -o -name "*cache*" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Log files
    local log_files=$(find "$target_path" -name "*.log" 2>/dev/null | wc -l)
    local log_size=$(find "$target_path" -name "*.log" -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Temporary files
    local temp_files=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) 2>/dev/null | wc -l)
    local temp_size=$(find "$target_path" \( -name "*.tmp" -o -name "*.temp" -o -path "*/tmp/*" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Backup files
    local backup_files=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) 2>/dev/null | wc -l)
    local backup_size=$(find "$target_path" \( -name "*.bak" -o -name "*~" -o -name "*.backup" \) -exec du -sk {} + 2>/dev/null | awk '{sum+=$1} END {print sum+0}')
    
    # Potential savings calculation
    local total_cleanup_kb=$((cache_size + log_size + temp_size + backup_size))
    local total_cleanup_mb=$((total_cleanup_kb / 1024))
    
    # Generate recommendations
    local recommendations=()
    [[ $cache_size -gt 102400 ]] && recommendations+=("Consider cleaning cache files (${cache_size} KB)")
    [[ $log_size -gt 51200 ]] && recommendations+=("Archive or clean old log files (${log_size} KB)")
    [[ $temp_size -gt 10240 ]] && recommendations+=("Remove temporary files (${temp_size} KB)")
    [[ $backup_size -gt 102400 ]] && recommendations+=("Review backup file retention (${backup_size} KB)")
    
    # Update report
    jq --argjson cache_files "$cache_files" \
       --argjson cache_size "$cache_size" \
       --argjson log_files "$log_files" \
       --argjson log_size "$log_size" \
       --argjson temp_files "$temp_files" \
       --argjson temp_size "$temp_size" \
       --argjson backup_files "$backup_files" \
       --argjson backup_size "$backup_size" \
       --argjson total_cleanup "$total_cleanup_kb" \
       --argjson recommendations "$(printf '%s\n' "${recommendations[@]}" | jq -R . | jq -s .)" \
       '.file_analysis.optimization = {
           "cache_files": {"count": $cache_files, "size_kb": $cache_size},
           "log_files": {"count": $log_files, "size_kb": $log_size},
           "temp_files": {"count": $temp_files, "size_kb": $temp_size},
           "backup_files": {"count": $backup_files, "size_kb": $backup_size},
           "potential_savings_kb": $total_cleanup,
           "potential_savings_mb": ($total_cleanup / 1024)
       } | .recommendations = $recommendations' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Fleet storage management
manage_fleet_storage() {
    local action="$1"
    local target_pattern="$2"
    local threshold="$3"
    
    log_action "Fleet storage management: $action"
    
    case "$action" in
        "audit")
            audit_fleet_storage "$target_pattern"
            ;;
        "cleanup")
            cleanup_fleet_storage "$target_pattern" "$threshold"
            ;;
        "monitor")
            monitor_fleet_storage
            ;;
        "report")
            generate_fleet_storage_report
            ;;
    esac
}

# Generate fleet storage report
generate_fleet_storage_report() {
    local fleet_report="$REPORT_DIR/fleet_storage_report_$(date +%Y%m%d_%H%M%S).json"
    
    echo "Generating fleet storage report..."
    
    # System overview
    local total_disk_space=$(df -k / | awk 'NR==2 {print $2}')
    local used_disk_space=$(df -k / | awk 'NR==2 {print $3}')
    local available_disk_space=$(df -k / | awk 'NR==2 {print $4}')
    local disk_usage_percent=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
    
    cat > "$fleet_report" << EOF
{
    "fleet_storage_report": {
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "system_overview": {
            "total_disk_kb": $total_disk_space,
            "used_disk_kb": $used_disk_space,
            "available_disk_kb": $available_disk_space,
            "usage_percent": $disk_usage_percent
        },
        "category_analysis": {},
        "alerts": [],
        "recommendations": []
    }
}
EOF
    
    # Analyze each storage category
    for category in "${!STORAGE_CATEGORIES[@]}"; do
        echo "Analyzing category: $category"
        IFS=',' read -ra PATHS <<< "${STORAGE_CATEGORIES[$category]}"
        local category_size=0
        local category_files=0
        
        for path in "${PATHS[@]}"; do
            if [[ -d "$path" ]]; then
                local path_size=$(du -sk "$path" 2>/dev/null | cut -f1)
                local path_files=$(find "$path" -type f 2>/dev/null | wc -l)
                category_size=$((category_size + path_size))
                category_files=$((category_files + path_files))
            fi
        done
        
        # Update report with category data
        jq --arg category "$category" \
           --argjson size "$category_size" \
           --argjson files "$category_files" \
           '.fleet_storage_report.category_analysis[$category] = {
               "size_kb": $size,
               "file_count": $files,
               "size_mb": ($size / 1024),
               "size_gb": ($size / 1048576)
           }' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    done
    
    # Generate alerts based on thresholds
    local alerts=()
    if [[ $disk_usage_percent -gt 90 ]]; then
        alerts+=("CRITICAL: Disk usage above 90% ($disk_usage_percent%)")
    elif [[ $disk_usage_percent -gt 80 ]]; then
        alerts+=("WARNING: Disk usage above 80% ($disk_usage_percent%)")
    fi
    
    # Add alerts to report
    if [[ ${#alerts[@]} -gt 0 ]]; then
        jq --argjson alerts "$(printf '%s\n' "${alerts[@]}" | jq -R . | jq -s .)" \
           '.fleet_storage_report.alerts = $alerts' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
    fi
    
    log_action "Fleet storage report generated: $fleet_report"
    echo "$fleet_report"
}

# Main execution function
main() {
    local action="${1:-analyze}"
    local target="${2:-/Users}"
    local level="${3:-standard}"
    local options="${4:-}"
    
    log_action "=== MacFleet Storage Management Started ==="
    log_action "Action: $action, Target: $target, Level: $level"
    
    case "$action" in
        "analyze")
            analyze_storage_comprehensive "$target" "$level"
            ;;
        "size")
            echo "File/Folder size:"
            du -h "$target"
            ;;
        "fleet")
            manage_fleet_storage "$level" "$target" "$options"
            ;;
        "report")
            generate_fleet_storage_report
            ;;
        "help")
            echo "Usage: $0 [action] [target] [level] [options]"
            echo "Actions:"
            echo "  analyze <path> [level] - Comprehensive storage analysis"
            echo "  size <path> - Simple size check"
            echo "  fleet <action> [pattern] [threshold] - Fleet management"
            echo "  report - Generate fleet storage report"
            echo "  help - Show this help"
            echo ""
            echo "Analysis levels: quick, standard, comprehensive, optimization, security"
            echo "Fleet actions: audit, cleanup, monitor, report"
            ;;
        *)
            log_action "ERROR: Unknown action: $action"
            exit 1
            ;;
    esac
    
    log_action "=== Storage management completed ==="
}

# Execute main function
main "$@"

Common Storage Analysis Tasks

Directory Comparison

#!/bin/bash

# Compare sizes of multiple directories
compare_directories() {
    local directories=("$@")
    
    echo "=== Directory Size Comparison ==="
    printf "%-40s %15s %15s\n" "Directory" "Size" "Files"
    echo "--------------------------------------------------------------------------------"
    
    for dir in "${directories[@]}"; do
        if [[ -d "$dir" ]]; then
            local size=$(du -sh "$dir" 2>/dev/null | cut -f1)
            local files=$(find "$dir" -type f 2>/dev/null | wc -l)
            printf "%-40s %15s %15d\n" "$dir" "$size" "$files"
        else
            printf "%-40s %15s %15s\n" "$dir" "N/A" "N/A"
        fi
    done
}

# Usage
compare_directories "/Users/stefan/Documents" "/Users/stefan/Downloads" "/Users/stefan/Desktop"

Storage Growth Tracking

#!/bin/bash

# Track storage growth over time
track_storage_growth() {
    local target_path="$1"
    local log_file="/var/log/storage_growth.csv"
    
    # Create header if file doesn't exist
    if [[ ! -f "$log_file" ]]; then
        echo "timestamp,path,size_kb,files,directories" > "$log_file"
    fi
    
    # Get current metrics
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local size_kb=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    local files=$(find "$target_path" -type f 2>/dev/null | wc -l)
    local dirs=$(find "$target_path" -type d 2>/dev/null | wc -l)
    
    # Log current measurement
    echo "$timestamp,$target_path,$size_kb,$files,$dirs" >> "$log_file"
    
    echo "Storage growth logged for: $target_path"
    echo "Current size: $(echo "scale=2; $size_kb/1024" | bc) MB"
    echo "Files: $files, Directories: $dirs"
}

# Usage
track_storage_growth "/Users/stefan"
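Once several measurements have accumulated, the growth between the first and latest entries for a path can be derived from the CSV itself. A minimal sketch, assuming the log format written by the function above (the helper name is illustrative):

```shell
#!/bin/bash
# Compute growth between the first and last logged sizes for one path,
# assuming CSV rows of the form: timestamp,path,size_kb,files,directories
growth_from_log() {
    local log_file="$1"
    local target_path="$2"
    awk -F',' -v p="$target_path" '
        $2 == p { if (!first) first = $3; last = $3 }
        END {
            if (first > 0)
                printf "Growth: %d KB (%.2f%%)\n", last - first, (last - first) * 100 / first
        }' "$log_file"
}

# Example against a synthetic log
log=$(mktemp)
cat > "$log" <<'EOF'
timestamp,path,size_kb,files,directories
2024-01-01 00:00:00,/Users/demo,1000,10,2
2024-02-01 00:00:00,/Users/demo,1250,12,2
EOF
growth_from_log "$log" "/Users/demo"   # Growth: 250 KB (25.00%)
rm -f "$log"
```

Filtering on the exact path field keeps the calculation correct when several directories share the same log file.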

Cleanup Recommendations

#!/bin/bash

# Generate cleanup recommendations
generate_cleanup_recommendations() {
    local target_path="$1"
    
    echo "=== Cleanup Recommendations for: $target_path ==="
    
    # Old downloads
    local old_downloads=$(find "$target_path" -path "*/Downloads/*" -type f -mtime +30 2>/dev/null | wc -l)
    if [[ $old_downloads -gt 0 ]]; then
        echo "📁 Found $old_downloads files in Downloads older than 30 days"
    fi
    
    # Large cache files
    local cache_count=$(find "$target_path" -path "*/Cache*" -type f -size +10M 2>/dev/null | wc -l)
    if [[ $cache_count -gt 0 ]]; then
        echo "🗂️  Found $cache_count large cache files (>10MB)"
    fi
    
    # Duplicate files
    local duplicates=$(find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | uniq -d -w 32 | wc -l)
    if [[ $duplicates -gt 0 ]]; then
        echo "📄 Found approximately $duplicates duplicate files"
    fi
    
    # Empty directories
    local empty_dirs=$(find "$target_path" -type d -empty 2>/dev/null | wc -l)
    if [[ $empty_dirs -gt 0 ]]; then
        echo "📂 Found $empty_dirs empty directories"
    fi
    
    # Log files
    local large_logs=$(find "$target_path" -name "*.log" -size +50M 2>/dev/null | wc -l)
    if [[ $large_logs -gt 0 ]]; then
        echo "📋 Found $large_logs large log files (>50MB)"
    fi
    
    echo -e "\n💡 Consider running cleanup operations for these items"
}

# Usage
generate_cleanup_recommendations "/Users"

Important Notes

  • File paths with spaces - Use quotes or escape characters: du -h "/path/with spaces"
  • Permission requirements - Some directories require admin access
  • Performance impact - Large directory scans can be resource-intensive
  • Regular monitoring - Set up automated storage monitoring for fleet management
  • Backup before cleanup - Always backup important data before cleanup operations
  • Test scripts on sample directories before fleet-wide deployment
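As a concrete illustration of the first note, an unquoted path containing spaces is split into multiple arguments, while the quoted form is analyzed as one path:

```shell
#!/bin/bash
# Demonstrate why paths with spaces must be quoted.
dir=$(mktemp -d)
mkdir "$dir/path with spaces"
printf 'data\n' > "$dir/path with spaces/file.txt"

# Quoted: the whole path is passed as a single argument
du -sh "$dir/path with spaces"

# Unquoted, the same command would expand to three separate arguments
# ("$dir/path", "with", "spaces") and fail with "No such file or directory".

rm -rf "$dir"
```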


Setting Up a GitHub Actions Runner on a Mac Mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners give you greater control and customization over your CI/CD setup. This tutorial walks you through setting up, configuring, and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (sign up with Macfleet)
  • A GitHub repository with administrator rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the user's password
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner

Switch to the new user account:

su gh-runner

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are using Apple Silicon):

# Install Git if it is not already installed
brew install git

# Install Rosetta 2 for Apple Silicon Macs
softwareupdate --install-rosetta

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners

GitHub Actions Runner

  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the provided commands to download and configure the runner

GitHub Actions Runner

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  6. Run the run.sh script in your runner directory to complete the setup.
  7. Verify in the terminal that the runner is active and waiting for jobs, and check your GitHub repository settings to confirm the runner's assignment and idle status.
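For reference, the commands GitHub generates for a macOS ARM64 runner follow this general shape; the exact version number and registration token come from the "New self-hosted runner" page, so the values below are placeholders rather than literal commands to copy:

```shell
# <VERSION> and <TOKEN> are placeholders supplied by GitHub's setup page
mkdir actions-runner && cd actions-runner
curl -o actions-runner-osx-arm64-<VERSION>.tar.gz -L \
  "https://github.com/actions/runner/releases/download/v<VERSION>/actions-runner-osx-arm64-<VERSION>.tar.gz"
tar xzf actions-runner-osx-arm64-<VERSION>.tar.gz
./config.sh --url https://github.com/<username>/<repository> --token <TOKEN>
./run.sh
```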

GitHub Actions Runner

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, edit the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Example workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated with your repository and tagged self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Regularly monitor runner logs for issues
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load balancing

Troubleshooting

Common issues and solutions:

  1. Runner does not connect:

    • Check network connectivity
    • Verify that the GitHub token is valid
    • Ensure appropriate permissions
  2. Build failures:

    • Check the Xcode installation
    • Verify required dependencies
    • Review workflow logs
  3. Permission issues:

    • Check user permissions
    • Verify the sudoers configuration
    • Check file system permissions

Conclusion

You have now configured a self-hosted GitHub Actions runner on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it up to date with the latest security patches and software versions.

Native App

Macfleet native App

Macfleet Installation Guide

Macfleet is a powerful fleet management solution built specifically for cloud-hosted Mac Mini environments. As a Mac Mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to give you comprehensive visibility into your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet app into the Applications folder
  • Eject the .dmg file
  • Open System Settings > Security & Privacy
    • Privacy tab > Accessibility
    • Enable Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow the app through Windows Defender when prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb package (Ubuntu/Debian) or the .rpm (CentOS/RHEL) here
  • Install it with your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions when prompted
  • Add the user to the appropriate groups if required
  • Launch Macfleet from the application menu
  • The application starts tracking automatically

Note: After installation on each system, sign in with your Macfleet credentials to sync data with your dashboard.