Tutorial

New updates and improvements to Macfleet.

Important Notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may result from using, modifying, or implementing these examples. Always review and test code in a safe environment before using it on production systems.

Disk Utility Management on macOS

Manage storage and disk operations across your MacFleet devices using advanced diskutil commands and enterprise storage management tools. This tutorial covers disk management, storage security, compliance monitoring, and enterprise storage lifecycle management.

Understanding macOS Disk Management

macOS provides several command-line tools for disk and storage management:

  • diskutil - Primary disk utility for volume and partition management
  • df - Display filesystem disk space usage
  • du - Display directory space usage
  • fsck - Filesystem check and repair utility
  • diskutil apfs - APFS-specific verbs for container, volume, and snapshot management
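These tools compose well in scripts. As a minimal, portable sketch, `df -P` can drive a capacity warning (the 80% default threshold here is an arbitrary example value, not a Macfleet recommendation):

```shell
#!/bin/bash
# Capacity check built on POSIX `df -P`.
# The 80% default threshold is an arbitrary example value.

check_volume_usage() {
    local mount_point="$1"
    local threshold="${2:-80}"
    local usage
    # Column 5 of `df -P` is the capacity percentage, e.g. "42%"
    usage=$(df -P "$mount_point" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
    if [[ "$usage" -ge "$threshold" ]]; then
        echo "WARNING: $mount_point at ${usage}% (threshold ${threshold}%)"
        return 1
    fi
    echo "OK: $mount_point at ${usage}%"
}

check_volume_usage "/" || true   # non-zero status just signals the warning
```

The non-zero return on breach makes the helper usable both interactively and as an exit-code check in monitoring jobs.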

Basic Disk Operations

List All Disks

#!/bin/bash

# Basic disk listing
diskutil list

echo "Disk inventory completed successfully"

Enhanced Disk Information

#!/bin/bash

# Enhanced disk information with detailed analysis
get_comprehensive_disk_info() {
    echo "=== Comprehensive Disk Analysis ==="
    
    # Basic disk list
    echo "Disk Inventory:"
    diskutil list
    
    echo -e "\nDisk Usage Summary:"
    df -h
    
    echo -e "\nAPFS Container Information:"
    diskutil apfs list
    
    echo -e "\nPhysical Disk Information:"
    system_profiler SPStorageDataType
    
    echo -e "\nSMART Status:"
    # Note: smartctl is not part of macOS; install smartmontools first
    # (e.g. via Homebrew: brew install smartmontools)
    for disk in $(diskutil list | grep "^/dev/disk[0-9]" | awk '{print $1}'); do
        echo "SMART status for $disk:"
        smartctl -H "$disk" 2>/dev/null || echo "  SMART data not available"
    done
}

# Execute comprehensive analysis
get_comprehensive_disk_info
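Where the full analysis above is too verbose for quick fleet checks, `df -P` alone can be condensed into a compact per-volume summary; a portable sketch:

```shell
#!/bin/bash
# Compact per-volume usage table from POSIX `df -P`
# (portable across macOS and Linux; no diskutil required).

summarize_disk_usage() {
    df -P | awk 'NR > 1 {
        printf "%-30s %12s / %-12s %s\n", $6, $3, $2, $5
    }'
}

summarize_disk_usage
```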

Storage Management Categories

Storage Type Classifications

#!/bin/bash

# Enterprise storage categories for management and policy enforcement
declare -A STORAGE_CATEGORIES=(
    ["system_critical"]="System volumes, boot partitions, recovery partitions"
    ["user_data"]="User home directories, personal documents, application data"
    ["application_storage"]="Application binaries, frameworks, system libraries"
    ["development_workspace"]="Development environments, source code, build artifacts"
    ["media_content"]="Videos, images, audio files, creative assets"
    ["backup_archives"]="Time Machine, system backups, archive storage"
    ["temporary_cache"]="Cache files, temporary data, system logs"
    ["security_volumes"]="Encrypted volumes, secure containers, key storage"
    ["network_storage"]="Mounted network drives, cloud storage, remote volumes"
    ["external_devices"]="USB drives, external HDDs, removable storage"
)

# Storage risk levels
declare -A RISK_LEVELS=(
    ["system_critical"]="critical"
    ["user_data"]="high"
    ["application_storage"]="medium"
    ["development_workspace"]="medium"
    ["media_content"]="low"
    ["backup_archives"]="high"
    ["temporary_cache"]="low"
    ["security_volumes"]="critical"
    ["network_storage"]="medium"
    ["external_devices"]="high"
)

# Management priorities
declare -A MANAGEMENT_PRIORITIES=(
    ["system_critical"]="highest_protection"
    ["user_data"]="high_protection"
    ["application_storage"]="standard_protection"
    ["development_workspace"]="standard_protection"
    ["media_content"]="basic_protection"
    ["backup_archives"]="highest_protection"
    ["temporary_cache"]="cleanup_priority"
    ["security_volumes"]="highest_protection"
    ["network_storage"]="monitoring_priority"
    ["external_devices"]="security_screening"
)

print_storage_categories() {
    echo "=== Storage Management Categories ==="
    for category in "${!STORAGE_CATEGORIES[@]}"; do
        echo "Category: $category"
        echo "  Description: ${STORAGE_CATEGORIES[$category]}"
        echo "  Risk Level: ${RISK_LEVELS[$category]}"
        echo "  Management Priority: ${MANAGEMENT_PRIORITIES[$category]}"
        echo ""
    done
}

# Display available categories
print_storage_categories
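The category, risk, and priority tables can also drive automatic classification of mounted volumes. A sketch using illustrative path patterns (the patterns are assumptions for demonstration, not a MacFleet standard):

```shell
#!/bin/bash
# Classify a mount point into one of the storage categories above.
# Path patterns are illustrative assumptions, not a MacFleet standard.

classify_mount_point() {
    local mount="$1"
    case "$mount" in
        "/"|"/System"*)          echo "system_critical" ;;
        "/Users"*)               echo "user_data" ;;
        "/Applications"*)        echo "application_storage" ;;
        "/Volumes/TimeMachine"*) echo "backup_archives" ;;
        "/Volumes/"*)            echo "external_devices" ;;
        "/tmp"*|"/private/tmp"*) echo "temporary_cache" ;;
        *)                       echo "unclassified" ;;
    esac
}

classify_mount_point "/Users/alice"    # user_data
classify_mount_point "/Volumes/USB-1"  # external_devices
```

Pattern order matters: the more specific `/Volumes/TimeMachine` case must precede the general `/Volumes/` catch-all.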

Disk Management Policies

Storage Policy Engine

#!/bin/bash

# Storage management policies for different organizational requirements
declare -A STORAGE_POLICIES=(
    ["enterprise_standard"]="Standard enterprise storage with backup and monitoring"
    ["high_security_financial"]="High-security storage for financial and sensitive data"
    ["creative_workflow"]="Optimized for creative teams with large media files"
    ["development_environment"]="Development-focused with version control and build optimization"
    ["compliance_strict"]="Strict compliance with data retention and audit requirements"
    ["performance_optimized"]="Performance-focused with SSD optimization and caching"
    ["cost_efficient"]="Cost-efficient storage with intelligent tiering and cleanup"
)

# Policy configurations
get_storage_policy() {
    local policy_type="$1"
    
    case "$policy_type" in
        "enterprise_standard")
            cat << EOF
{
    "encryption_required": true,
    "backup_frequency": "daily",
    "retention_period": "90_days",
    "compression_enabled": true,
    "deduplication": true,
    "monitoring_level": "standard",
    "security_scanning": true,
    "automatic_cleanup": false,
    "performance_optimization": "balanced",
    "compliance_frameworks": ["iso27001"],
    "disk_health_monitoring": true,
    "storage_quotas": {
        "user_data": "100GB",
        "application_storage": "50GB",
        "temporary_cache": "10GB"
    }
}
EOF
            ;;
        "high_security_financial")
            cat << EOF
{
    "encryption_required": true,
    "encryption_level": "aes256",
    "backup_frequency": "continuous",
    "retention_period": "2555_days",
    "compression_enabled": false,
    "deduplication": false,
    "monitoring_level": "comprehensive",
    "security_scanning": true,
    "automatic_cleanup": false,
    "performance_optimization": "security_first",
    "compliance_frameworks": ["sox", "pci_dss", "nist"],
    "disk_health_monitoring": true,
    "immutable_backups": true,
    "access_logging": "detailed",
    "integrity_verification": "continuous",
    "storage_quotas": {
        "user_data": "50GB",
        "application_storage": "25GB",
        "temporary_cache": "5GB",
        "audit_logs": "unlimited"
    }
}
EOF
            ;;
        "creative_workflow")
            cat << EOF
{
    "encryption_required": false,
    "backup_frequency": "project_based",
    "retention_period": "365_days",
    "compression_enabled": false,
    "deduplication": false,
    "monitoring_level": "performance",
    "security_scanning": false,
    "automatic_cleanup": true,
    "performance_optimization": "speed_first",
    "compliance_frameworks": ["client_confidentiality"],
    "disk_health_monitoring": true,
    "large_file_optimization": true,
    "cache_acceleration": true,
    "storage_quotas": {
        "user_data": "500GB",
        "media_content": "2TB",
        "project_archives": "1TB",
        "temporary_cache": "100GB"
    }
}
EOF
            ;;
        *)
            echo "Unknown storage policy: $policy_type"
            return 1
            ;;
    esac
}

# Apply storage policy
apply_storage_policy() {
    local policy="$1"
    local config_file="/tmp/storage_policy.json"
    
    echo "Applying storage policy: $policy"
    
    get_storage_policy "$policy" > "$config_file"
    
    if [[ ! -f "$config_file" ]]; then
        echo "❌ Failed to generate policy configuration"
        return 1
    fi
    
    echo "✅ Storage policy applied successfully"
    echo "Configuration: $config_file"
    
    # Display key policy settings
    echo "=== Policy Summary ==="
    echo "Encryption Required: $(jq -r '.encryption_required' "$config_file")"
    echo "Backup Frequency: $(jq -r '.backup_frequency' "$config_file")"
    echo "Retention Period: $(jq -r '.retention_period' "$config_file")"
    echo "Monitoring Level: $(jq -r '.monitoring_level' "$config_file")"
    echo "Performance Optimization: $(jq -r '.performance_optimization' "$config_file")"
    
    return 0
}
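apply_storage_policy assumes get_storage_policy emitted a complete document. A small pre-flight check for the keys the summary reads can catch truncated output early; this sketch uses plain grep so it also works on hosts where jq is unavailable:

```shell
#!/bin/bash
# Pre-flight check: confirm a policy file mentions every key the
# summary in apply_storage_policy reads. Uses grep rather than jq,
# so it works even on hosts without jq installed.

validate_policy_file() {
    local config_file="$1"
    local status=0
    local key
    for key in encryption_required backup_frequency retention_period \
               monitoring_level performance_optimization; do
        if ! grep -q "\"$key\"" "$config_file"; then
            echo "Missing required key: $key"
            status=1
        fi
    done
    return $status
}

# Demo against a minimal policy fragment
tmp_policy=$(mktemp)
cat > "$tmp_policy" << 'EOF'
{
    "encryption_required": true,
    "backup_frequency": "daily",
    "retention_period": "90_days",
    "monitoring_level": "standard",
    "performance_optimization": "balanced"
}
EOF
validate_policy_file "$tmp_policy" && echo "policy file OK"
rm -f "$tmp_policy"
```

This checks key presence only, not value types; a jq-based schema check is stricter where jq is guaranteed.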

Advanced Disk Operations

Secure Disk Erasure

#!/bin/bash

# Enhanced secure disk erasure with verification
secure_erase_volume() {
    local identifier="$1"
    local security_level="$2"
    local new_name="$3"
    local filesystem_type="${4:-APFS}"
    
    echo "=== Secure Volume Erasure ==="
    echo "Target: $identifier"
    echo "Security Level: $security_level"
    echo "New Name: $new_name"
    echo "Filesystem: $filesystem_type"
    
    # Verify disk exists
    if ! diskutil info "$identifier" &>/dev/null; then
        echo "❌ Error: Disk identifier '$identifier' not found"
        return 1
    fi
    
    # Get disk information before erasure
    local disk_info
    disk_info=$(diskutil info "$identifier")
    echo "Disk Information:"
    echo "$disk_info"
    
    # Confirmation prompt
    echo ""
    echo "⚠️  WARNING: This will permanently erase all data on $identifier"
    echo "Current volume: $(echo "$disk_info" | grep "Volume Name:" | awk -F': ' '{print $2}')"
    echo "Size: $(echo "$disk_info" | grep "Disk Size:" | awk -F': ' '{print $2}')"
    
    read -p "Type 'CONFIRM' to proceed with erasure: " confirmation
    if [[ "$confirmation" != "CONFIRM" ]]; then
        echo "❌ Operation cancelled by user"
        return 1
    fi
    
    # Perform secure erasure based on security level
    # `diskutil secureErase` levels: 0 zero-fill, 1 single-pass random,
    # 2 US DoD 7-pass, 3 Gutmann 35-pass, 4 US DoE 3-pass.
    # Note: Apple deprecates secureErase on SSDs; prefer FileVault
    # encryption plus a standard erase on flash storage.
    case "$security_level" in
        "basic")
            echo "Performing basic erasure..."
            diskutil eraseVolume "$filesystem_type" "$new_name" "$identifier"
            ;;
        "secure")
            echo "Performing secure erasure (US DoD 7-pass)..."
            diskutil secureErase freespace 2 "$identifier"
            diskutil eraseVolume "$filesystem_type" "$new_name" "$identifier"
            ;;
        "military")
            echo "Performing Gutmann 35-pass erasure..."
            diskutil secureErase freespace 3 "$identifier"
            diskutil eraseVolume "$filesystem_type" "$new_name" "$identifier"
            ;;
        *)
            echo "❌ Unknown security level: $security_level"
            return 1
            ;;
    esac
    
    # Verify erasure completion
    if diskutil info "$identifier" | grep -q "Volume Name.*$new_name"; then
        echo "✅ Secure erasure completed successfully"
        echo "New volume: $new_name"
        echo "Filesystem: $filesystem_type"
        
        # Log the operation
        audit_log "Secure erasure completed: $identifier -> $new_name (Security: $security_level)"
        
        return 0
    else
        echo "❌ Erasure verification failed"
        return 1
    fi
}

# Usage example
# secure_erase_volume "disk2s1" "secure" "CleanVolume" "APFS"
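The numeric codes passed to `diskutil secureErase` follow its documented scale (0 zero-fill, 1 single-pass random, 2 US DoD 7-pass, 3 Gutmann 35-pass, 4 US DoE 3-pass). Keeping the name-to-code mapping in one small, testable helper avoids the comments and the codes drifting apart; a sketch:

```shell
#!/bin/bash
# Name-to-code lookup for `diskutil secureErase` levels, per its
# documented scale: 0 zero-fill, 1 single-pass random, 2 US DoD
# 7-pass, 3 Gutmann 35-pass, 4 US DoE 3-pass. Pure lookup; performs
# no erasure itself.

erase_level_code() {
    case "$1" in
        basic)    echo 0 ;;
        secure)   echo 2 ;;
        military) echo 3 ;;
        *)        echo "unknown security level: $1" >&2; return 1 ;;
    esac
}

erase_level_code secure   # 2
```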

Volume Health Monitoring

#!/bin/bash

# Comprehensive volume health monitoring and assessment
monitor_volume_health() {
    local identifier="$1"
    local health_report="/tmp/volume_health_$(date +%Y%m%d_%H%M%S).json"
    
    echo "=== Volume Health Monitoring ==="
    echo "Target Volume: $identifier"
    
    # Initialize health report
    cat > "$health_report" << EOF
{
    "volume_identifier": "$identifier",
    "scan_timestamp": "$(date -Iseconds)",
    "hostname": "$(hostname)",
    "health_checks": {}
}
EOF
    
    # Volume verification
    echo "Running volume verification..."
    local verify_output
    verify_output=$(diskutil verifyVolume "$identifier" 2>&1)
    local verify_status=$?
    
    # Parse verification results
    local verification_passed=false
    if [[ $verify_status -eq 0 ]] && echo "$verify_output" | grep -q "appears to be OK"; then
        verification_passed=true
    fi
    
    # SMART status check (if supported)
    echo "Checking SMART status..."
    local smart_status="unknown"
    local physical_disk
    physical_disk=$(diskutil info "$identifier" | grep "Part of Whole" | awk '{print $4}')
    
    if [[ -n "$physical_disk" ]]; then
        # smartctl expects a device node, e.g. /dev/disk0 (requires smartmontools)
        if smartctl -H "/dev/$physical_disk" 2>/dev/null | grep -q "PASSED"; then
            smart_status="passed"
        elif smartctl -H "/dev/$physical_disk" 2>/dev/null | grep -q "FAILED"; then
            smart_status="failed"
        fi
    fi
    
    # Disk usage analysis
    echo "Analyzing disk usage..."
    local disk_usage
    disk_usage=$(df -h "$identifier" 2>/dev/null | tail -1)
    local usage_percent
    usage_percent=$(echo "$disk_usage" | awk '{print $5}' | tr -d '%')
    
    # Performance test
    echo "Running performance test..."
    local write_speed read_speed
    local mount_point
    mount_point=$(diskutil info "$identifier" | grep "Mount Point" | awk -F': ' '{print $2}' | xargs)
    local test_file="${mount_point}/speed_test_$$"
    
    if [[ -n "$mount_point" && -w "$mount_point" ]]; then
        # Write test (10MB); BSD dd reports "(N bytes/sec)" on stderr
        write_speed=$(dd if=/dev/zero of="$test_file" bs=1m count=10 2>&1 | awk -F'[()]' '/bytes\/sec/ {print $2}')
        
        # Read test
        read_speed=$(dd if="$test_file" of=/dev/null bs=1m 2>&1 | awk -F'[()]' '/bytes\/sec/ {print $2}')
        
        # Cleanup
        rm -f "$test_file"
    else
        write_speed="not_available"
        read_speed="not_available"
    fi
    
    # Update health report
    jq --argjson verification "$verification_passed" \
       --arg smart_status "$smart_status" \
       --argjson usage_percent "${usage_percent:-0}" \
       --arg write_speed "$write_speed" \
       --arg read_speed "$read_speed" \
       '.health_checks = {
          "filesystem_verification": $verification,
          "smart_status": $smart_status,
          "disk_usage_percent": $usage_percent,
          "write_speed": $write_speed,
          "read_speed": $read_speed
        }' "$health_report" > "${health_report}.tmp" && mv "${health_report}.tmp" "$health_report"
    
    # Health assessment
    local health_score=100
    local issues=()
    
    if [[ "$verification_passed" != "true" ]]; then
        ((health_score -= 30))
        issues+=("filesystem_corruption")
    fi
    
    if [[ "$smart_status" == "failed" ]]; then
        ((health_score -= 50))
        issues+=("smart_failure")
    fi
    
    if [[ ${usage_percent:-0} -gt 90 ]]; then
        ((health_score -= 20))
        issues+=("disk_space_critical")
    elif [[ ${usage_percent:-0} -gt 80 ]]; then
        ((health_score -= 10))
        issues+=("disk_space_warning")
    fi
    
    # Add health score and issues to report (emit [] when no issues found)
    local issues_json="[]"
    if [[ ${#issues[@]} -gt 0 ]]; then
        issues_json=$(printf '%s\n' "${issues[@]}" | jq -R . | jq -s .)
    fi
    jq --argjson health_score "$health_score" \
       --argjson issues "$issues_json" \
       '.health_assessment = {
          "overall_score": $health_score,
          "issues_detected": $issues,
          "recommendations": []
        }' "$health_report" > "${health_report}.tmp" && mv "${health_report}.tmp" "$health_report"
    
    # Display results
    echo ""
    echo "Health Assessment Results:"
    echo "  Overall Health Score: $health_score/100"
    echo "  Filesystem Verification: $([ "$verification_passed" = "true" ] && echo "✅ PASSED" || echo "❌ FAILED")"
    echo "  SMART Status: $smart_status"
    echo "  Disk Usage: ${usage_percent:-0}%"
    echo "  Write Speed: $write_speed"
    echo "  Read Speed: $read_speed"
    
    if [[ ${#issues[@]} -gt 0 ]]; then
        echo "  Issues Detected:"
        for issue in "${issues[@]}"; do
            echo "    - $issue"
        done
    else
        echo "  ✅ No issues detected"
    fi
    
    echo "  Health Report: $health_report"
    
    # Log health check
    audit_log "Volume health check completed: $identifier (Score: $health_score/100)"
    
    return 0
}
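The scoring rules embedded in monitor_volume_health can be factored into a pure function, so the weights are unit-testable without touching a real disk; a sketch:

```shell
#!/bin/bash
# The scoring rules from monitor_volume_health as a pure function:
# same weights (-30 failed verification, -50 SMART failure, -20/-10
# for usage above 90%/80%), but testable without a real disk.

compute_health_score() {
    local verification_passed="$1"   # "true" or "false"
    local smart_status="$2"          # "passed", "failed", or "unknown"
    local usage_percent="$3"         # integer 0-100
    local score=100
    if [[ "$verification_passed" != "true" ]]; then
        score=$((score - 30))
    fi
    if [[ "$smart_status" == "failed" ]]; then
        score=$((score - 50))
    fi
    if [[ "$usage_percent" -gt 90 ]]; then
        score=$((score - 20))
    elif [[ "$usage_percent" -gt 80 ]]; then
        score=$((score - 10))
    fi
    echo "$score"
}

compute_health_score true passed 45    # 100
compute_health_score false failed 95   # 0
```

Separating policy (weights) from mechanism (disk probing) also makes it easy to tune the weights per fleet without re-testing the probing code.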

Enterprise Storage Management System

#!/bin/bash

# MacFleet Enterprise Storage Management System
# Comprehensive disk and storage management

# Configuration
CONFIG_DIR="/etc/macfleet/storage"
LOG_FILE="/var/log/macfleet_storage_management.log"
DATA_DIR="/var/data/macfleet/storage"
REPORTS_DIR="/var/reports/macfleet/storage"
AUDIT_LOG="/var/log/macfleet_storage_audit.log"
BACKUP_DIR="/var/backups/macfleet/storage"

# Create required directories
create_directories() {
    local directories=("$CONFIG_DIR" "$DATA_DIR" "$REPORTS_DIR" "$BACKUP_DIR")
    
    for dir in "${directories[@]}"; do
        if [[ ! -d "$dir" ]]; then
            sudo mkdir -p "$dir"
            sudo chmod 755 "$dir"
        fi
    done
}

# Logging functions
log_action() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') [INFO] $1" | tee -a "$LOG_FILE"
}

log_error() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') [ERROR] $1" | tee -a "$LOG_FILE" >&2
}

audit_log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') [AUDIT] $1" | tee -a "$AUDIT_LOG"
}

# Storage inventory management
initialize_storage_database() {
    local db_file="$DATA_DIR/storage_inventory.json"
    
    if [[ ! -f "$db_file" ]]; then
        cat > "$db_file" << EOF
{
    "version": "1.0",
    "created": "$(date -Iseconds)",
    "storage_devices": {},
    "volume_configurations": {},
    "health_monitoring": {},
    "performance_metrics": {},
    "security_assessments": {}
}
EOF
        log_action "Storage database initialized: $db_file"
    fi
}

# Comprehensive storage discovery
discover_storage_infrastructure() {
    local discovery_file="$DATA_DIR/storage_discovery_$(date +%Y%m%d_%H%M%S).json"
    
    log_action "Starting comprehensive storage discovery"
    
    echo "=== Storage Infrastructure Discovery ==="
    
    # Physical storage devices
    echo "Discovering physical storage devices..."
    local physical_devices
    physical_devices=$(system_profiler SPStorageDataType -json)
    
    # Logical volumes and partitions
    echo "Analyzing logical volumes..."
    local logical_volumes
    # Note: plutil needs a trailing "-" to read from stdin
    logical_volumes=$(diskutil list -plist | plutil -convert json -o - -)
    
    # APFS containers and volumes
    echo "Examining APFS containers..."
    local apfs_containers
    apfs_containers=$(diskutil apfs list -plist | plutil -convert json -o - -)
    
    # Network mounted volumes
    echo "Checking network volumes..."
    local network_volumes
    network_volumes=$(mount | grep -E "(nfs|smb|afp)" | jq -R . | jq -s .)
    
    # External devices
    echo "Detecting external devices..."
    local external_devices
    external_devices=$(diskutil list external -plist | plutil -convert json -o - -)
    
    # Encryption status
    echo "Assessing encryption status..."
    local encryption_status=()
    while IFS= read -r volume; do
        if [[ -n "$volume" ]]; then
            local encrypted="false"
            if diskutil info "$volume" | grep -q "FileVault.*Yes"; then
                encrypted="true"
            fi
            encryption_status+=("{\"volume\": \"$volume\", \"encrypted\": $encrypted}")
        fi
    done <<< "$(diskutil list | grep -E "^\s+[0-9]+:" | awk '{print $NF}')"
    
    # Generate comprehensive discovery report
    cat > "$discovery_file" << EOF
{
    "discovery_timestamp": "$(date -Iseconds)",
    "hostname": "$(hostname)",
    "physical_storage": $physical_devices,
    "logical_volumes": $logical_volumes,
    "apfs_containers": $apfs_containers,
    "network_volumes": $network_volumes,
    "external_devices": $external_devices,
    "encryption_status": [$(IFS=,; echo "${encryption_status[*]}")],
    "system_info": {
        "os_version": "$(sw_vers -productVersion)",
        "total_storage": "$(df -h / | tail -1 | awk '{print $2}')",
        "available_storage": "$(df -h / | tail -1 | awk '{print $4}')"
    }
}
EOF
    
    log_action "Storage discovery completed: $discovery_file"
    
    # Display summary
    echo ""
    echo "Discovery Summary:"
    echo "  Physical Devices: $(echo "$physical_devices" | jq '.SPStorageDataType | length' 2>/dev/null || echo "0")"
    echo "  Logical Volumes: $(echo "$logical_volumes" | jq '.AllDisks | length' 2>/dev/null || echo "0")"
    echo "  APFS Containers: $(echo "$apfs_containers" | jq '.Containers | length' 2>/dev/null || echo "0")"
    echo "  Network Volumes: $(echo "$network_volumes" | jq '. | length' 2>/dev/null || echo "0")"
    echo "  External Devices: $(echo "$external_devices" | jq '.AllDisks | length' 2>/dev/null || echo "0")"
    echo "  Discovery Report: $discovery_file"
    
    return 0
}

# Automated storage optimization
optimize_storage_performance() {
    local optimization_profile="$1"
    local optimization_report="$REPORTS_DIR/storage_optimization_$(date +%Y%m%d_%H%M%S).json"
    
    log_action "Starting storage optimization (Profile: $optimization_profile)"
    
    echo "=== Storage Performance Optimization ==="
    echo "Optimization Profile: $optimization_profile"
    
    # Initialize optimization report
    cat > "$optimization_report" << EOF
{
    "optimization_profile": "$optimization_profile",
    "start_timestamp": "$(date -Iseconds)",
    "hostname": "$(hostname)",
    "optimizations_performed": [],
    "performance_improvements": {},
    "recommendations": []
}
EOF
    
    local optimizations_performed=()
    
    case "$optimization_profile" in
        "performance")
            echo "Applying performance optimizations..."
            
            # Enable TRIM for third-party SSDs
            # Caution: `trimforce enable` prompts for confirmation and
            # reboots the machine; Apple SSDs ship with TRIM enabled
            if sudo trimforce enable 2>/dev/null; then
                optimizations_performed+=("trim_enabled")
                echo "  ✅ TRIM enabled for SSD optimization"
            fi
            
            # Optimize APFS (container identifier is the 3rd field of
            # "+-- Container diskN ..." lines; defragment support varies by macOS version)
            for container in $(diskutil apfs list | awk '/^\+-- Container/ {print $3}'); do
                if diskutil apfs defragment "$container" 2>/dev/null; then
                    optimizations_performed+=("apfs_defragment_$container")
                    echo "  ✅ APFS container $container defragmented"
                fi
            done
            
            # Clear system caches
            sudo purge
            optimizations_performed+=("system_cache_cleared")
            echo "  ✅ System caches cleared"
            ;;
            
        "security")
            echo "Applying security optimizations..."
            
            # Enable FileVault if not already enabled
            if ! fdesetup status | grep -q "FileVault is On"; then
                echo "  ⚠️  FileVault not enabled - recommend enabling for security"
                optimizations_performed+=("filevault_recommendation")
            else
                echo "  ✅ FileVault already enabled"
            fi
            
            # Secure delete free space
            for volume in $(df | grep "^/dev" | awk '{print $1}'); do
                if diskutil secureErase freespace 1 "$volume" 2>/dev/null; then
                    optimizations_performed+=("secure_delete_$volume")
                    echo "  ✅ Secure delete performed on $volume"
                fi
            done
            ;;
            
        "storage_cleanup")
            echo "Performing storage cleanup..."
            
            # Clean system logs
            local log_space_before
            log_space_before=$(du -sh /var/log 2>/dev/null | awk '{print $1}')
            
            sudo log erase --all 2>/dev/null
            optimizations_performed+=("system_logs_cleaned")
            
            # Clean user caches
            local cache_space_before
            cache_space_before=$(du -sh ~/Library/Caches 2>/dev/null | awk '{print $1}')
            
            find ~/Library/Caches -type f -atime +30 -delete 2>/dev/null
            optimizations_performed+=("user_caches_cleaned")
            
            # Clean downloads folder
            find ~/Downloads -type f -atime +90 -delete 2>/dev/null
            optimizations_performed+=("downloads_cleaned")
            
            echo "  ✅ System cleanup completed"
            ;;
            
        *)
            echo "❌ Unknown optimization profile: $optimization_profile"
            return 1
            ;;
    esac
    
    # Update optimization report (emit [] when nothing was performed)
    local opts_json="[]"
    if [[ ${#optimizations_performed[@]} -gt 0 ]]; then
        opts_json=$(printf '%s\n' "${optimizations_performed[@]}" | jq -R . | jq -s .)
    fi
    jq --argjson optimizations "$opts_json" \
       '.optimizations_performed = $optimizations | .end_timestamp = "'"$(date -Iseconds)"'"' \
       "$optimization_report" > "${optimization_report}.tmp" && mv "${optimization_report}.tmp" "$optimization_report"
    
    echo ""
    echo "Optimization Results:"
    echo "  Profile: $optimization_profile"
    echo "  Optimizations Performed: ${#optimizations_performed[@]}"
    echo "  Report: $optimization_report"
    
    audit_log "Storage optimization completed: $optimization_profile (${#optimizations_performed[@]} optimizations)"
    
    return 0
}
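Before enabling the storage_cleanup profile's age-based deletions fleet-wide, a dry-run preview of what the `find -atime` thresholds would remove is prudent; a portable sketch:

```shell
#!/bin/bash
# Dry-run preview for the age-based cleanup rules: report what the
# `find -atime` thresholds would delete without removing anything.

preview_cleanup() {
    local target_dir="$1"
    local age_days="$2"
    local count
    count=$(find "$target_dir" -type f -atime +"$age_days" 2>/dev/null | wc -l | tr -d ' ')
    echo "$target_dir: $count file(s) older than ${age_days} days would be removed"
}

preview_cleanup "$HOME/Downloads" 90
```

Running the preview first lets administrators sanity-check the 30- and 90-day thresholds against real data before swapping `-print` semantics for `-delete`.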

# Bulk volume operations
bulk_volume_operations() {
    local operation="$1"
    local volume_list="$2"
    local operation_report="$REPORTS_DIR/bulk_operations_$(date +%Y%m%d_%H%M%S).json"
    
    log_action "Starting bulk volume operations: $operation"
    
    if [[ ! -f "$volume_list" ]]; then
        log_error "Volume list file not found: $volume_list"
        return 1
    fi
    
    echo "=== Bulk Volume Operations ==="
    echo "Operation: $operation"
    echo "Volume List: $volume_list"
    
    # Initialize operation report
    cat > "$operation_report" << EOF
{
    "operation_type": "$operation",
    "start_timestamp": "$(date -Iseconds)",
    "hostname": "$(hostname)",
    "total_volumes": 0,
    "successful_operations": 0,
    "failed_operations": 0,
    "operation_details": []
}
EOF
    
    local total_volumes=0
    local successful_operations=0
    local failed_operations=0
    
    # Process each volume in the list
    while IFS= read -r volume_entry; do
        if [[ -n "$volume_entry" && ! "$volume_entry" =~ ^# ]]; then
            ((total_volumes++))
            
            # Parse volume entry (format: identifier|name|filesystem)
            IFS='|' read -r identifier name filesystem <<< "$volume_entry"
            
            echo "Processing volume: $identifier"
            
            local operation_success=false
            local operation_message=""
            
            case "$operation" in
                "verify")
                    if diskutil verifyVolume "$identifier" &>/dev/null; then
                        operation_success=true
                        operation_message="Volume verification passed"
                        ((successful_operations++))
                    else
                        operation_message="Volume verification failed"
                        ((failed_operations++))
                    fi
                    ;;
                    
                "repair")
                    if diskutil repairVolume "$identifier" &>/dev/null; then
                        operation_success=true
                        operation_message="Volume repair completed"
                        ((successful_operations++))
                    else
                        operation_message="Volume repair failed"
                        ((failed_operations++))
                    fi
                    ;;
                    
                "mount")
                    if diskutil mount "$identifier" &>/dev/null; then
                        operation_success=true
                        operation_message="Volume mounted successfully"
                        ((successful_operations++))
                    else
                        operation_message="Volume mount failed"
                        ((failed_operations++))
                    fi
                    ;;
                    
                "unmount")
                    if diskutil unmount "$identifier" &>/dev/null; then
                        operation_success=true
                        operation_message="Volume unmounted successfully"
                        ((successful_operations++))
                    else
                        operation_message="Volume unmount failed"
                        ((failed_operations++))
                    fi
                    ;;
                    
                *)
                    operation_message="Unknown operation: $operation"
                    ((failed_operations++))
                    ;;
            esac
            
            echo "  Result: $([ "$operation_success" = "true" ] && echo "✅ SUCCESS" || echo "❌ FAILED") - $operation_message"
            
            # Add to operation report
            local operation_detail=$(cat << EOF
{
    "identifier": "$identifier",
    "name": "$name",
    "filesystem": "$filesystem",
    "success": $operation_success,
    "message": "$operation_message",
    "timestamp": "$(date -Iseconds)"
}
EOF
)
            
            jq --argjson detail "$operation_detail" \
               '.operation_details += [$detail]' \
               "$operation_report" > "${operation_report}.tmp" && mv "${operation_report}.tmp" "$operation_report"
        fi
    done < "$volume_list"
    
    # Update final statistics
    jq --argjson total "$total_volumes" \
       --argjson successful "$successful_operations" \
       --argjson failed "$failed_operations" \
       '.total_volumes = $total | .successful_operations = $successful | .failed_operations = $failed | .end_timestamp = "'"$(date -Iseconds)"'"' \
       "$operation_report" > "${operation_report}.tmp" && mv "${operation_report}.tmp" "$operation_report"
    
    echo ""
    echo "Bulk Operation Summary:"
    echo "  Total Volumes: $total_volumes"
    echo "  Successful: $successful_operations"
    echo "  Failed: $failed_operations"
    if [[ $total_volumes -gt 0 ]]; then
        echo "  Success Rate: $(( successful_operations * 100 / total_volumes ))%"
    fi
    echo "  Operation Report: $operation_report"
    
    log_action "Bulk volume operations completed: $operation ($successful_operations/$total_volumes successful)"
    
    return 0
}
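bulk_volume_operations expects one `identifier|name|filesystem` entry per line, with `#` comments and blank lines ignored. A self-contained sketch of the format and the parse loop (the disk identifiers below are illustrative examples):

```shell
#!/bin/bash
# The `identifier|name|filesystem` list format expected by
# bulk_volume_operations, with the same comment- and blank-line
# handling. The disk identifiers below are illustrative examples.

parse_volume_list() {
    local list_file="$1"
    local volume_entry identifier name filesystem
    while IFS= read -r volume_entry; do
        if [[ -n "$volume_entry" && ! "$volume_entry" =~ ^# ]]; then
            IFS='|' read -r identifier name filesystem <<< "$volume_entry"
            echo "would process $identifier (name=$name, fs=$filesystem)"
        fi
    done < "$list_file"
}

volume_list=$(mktemp)
cat > "$volume_list" << 'EOF'
# MacFleet volume list: identifier|name|filesystem
disk2s1|ProjectData|APFS

disk3s2|MediaArchive|JHFS+
EOF
parse_volume_list "$volume_list"
rm -f "$volume_list"
```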

# Fleet-wide storage management
deploy_storage_management() {
    local fleet_config="$1"
    local storage_policy="$2"
    
    log_action "Starting fleet-wide storage management deployment"
    
    if [[ ! -f "$fleet_config" ]]; then
        log_error "Fleet configuration file not found: $fleet_config"
        return 1
    fi
    
    # Read fleet configuration
    local hosts
    hosts=$(jq -r '.hosts[]' "$fleet_config")
    
    echo "Deploying storage management to fleet..."
    echo "Storage Policy: $storage_policy"
    
    # Deploy to each host
    while IFS= read -r host; do
        if [[ -n "$host" ]]; then
            echo "Deploying to: $host"
            
            # Copy storage management script to remote host
            scp "$0" "root@${host}:/tmp/macfleet_storage.sh" || {
                log_error "Failed to copy script to $host"
                continue
            }
            
            # Apply storage policy on remote host
            ssh "root@${host}" "chmod +x /tmp/macfleet_storage.sh && /tmp/macfleet_storage.sh apply_policy '$storage_policy' &" || {
                log_error "Failed to apply storage policy on $host"
                continue
            }
            
            log_action "✅ Storage management deployed on: $host"
        fi
    done <<< "$hosts"
    
    log_action "Fleet storage deployment completed"
}

# Generate comprehensive storage reports
generate_storage_report() {
    local report_type="$1"
    local report_name="${2:-storage_report_$(date +%Y%m%d_%H%M%S)}"
    local report_file="$REPORTS_DIR/${report_name}.json"
    
    log_action "Generating storage report: $report_name (Type: $report_type)"
    
    case "$report_type" in
        "inventory")
            generate_storage_inventory_report "$report_file"
            ;;
        "health_assessment")
            generate_health_assessment_report "$report_file"
            ;;
        "performance_analysis")
            generate_performance_analysis_report "$report_file"
            ;;
        "security_audit")
            generate_security_audit_report "$report_file"
            ;;
        "compliance")
            generate_compliance_report "$report_file"
            ;;
        *)
            log_error "Unknown report type: $report_type"
            return 1
            ;;
    esac
    
    log_action "✅ Storage report generated: $report_file"
    echo "Report saved to: $report_file"
}

# Main function with command routing
main() {
    local command="${1:-}"
    [[ $# -gt 0 ]] && shift
    
    # Initialize
    create_directories
    initialize_storage_database
    
    case "$command" in
        "list")
            # Enhanced disk listing
            get_comprehensive_disk_info
            ;;
        "health_check")
            monitor_volume_health "$@"
            ;;
        "secure_erase")
            secure_erase_volume "$@"
            ;;
        "discover")
            discover_storage_infrastructure
            ;;
        "optimize")
            optimize_storage_performance "$@"
            ;;
        "bulk_operations")
            bulk_volume_operations "$@"
            ;;
        "apply_policy")
            apply_storage_policy "$@"
            ;;
        "fleet_deploy")
            deploy_storage_management "$@"
            ;;
        "generate_report")
            generate_storage_report "$@"
            ;;
        "show_categories")
            print_storage_categories
            ;;
        "show_policies")
            for policy in enterprise_standard high_security_financial creative_workflow development_environment compliance_strict performance_optimized cost_efficient; do
                echo "Policy: $policy"
                get_storage_policy "$policy" | jq .
                echo ""
            done
            ;;
        *)
            echo "MacFleet Enterprise Storage Management System"
            echo "Usage: $0 <command> [options]"
            echo ""
            echo "Commands:"
            echo "  list                                      - Enhanced disk listing and analysis"
            echo "  health_check <identifier>                 - Comprehensive volume health monitoring"
            echo "  secure_erase <identifier> <level> <name>  - Secure volume erasure"
            echo "  discover                                  - Storage infrastructure discovery"
            echo "  optimize <profile>                        - Storage performance optimization"
            echo "  bulk_operations <operation> <volume_list> - Bulk volume operations"
            echo "  apply_policy <policy>                     - Apply storage management policy"
            echo "  fleet_deploy <fleet_config> <policy>      - Deploy to fleet"
            echo "  generate_report <type> [name]             - Generate storage reports"
            echo "  show_categories                           - Show storage categories"
            echo "  show_policies                             - Show storage policies"
            echo ""
            echo "Examples:"
            echo "  $0 list"
            echo "  $0 health_check disk1s1"
            echo "  $0 secure_erase disk2s1 secure CleanDisk"
            echo "  $0 optimize performance"
            echo "  $0 generate_report inventory"
            ;;
    esac
}

# Execute main function with all arguments
main "$@"

Fleet Deployment Configuration

Fleet Configuration Example

{
    "fleet_name": "MacFleet Enterprise Storage",
    "deployment_date": "2025-07-07",
    "hosts": [
        "mac-workstation-01.company.com",
        "mac-server-01.company.com",
        "mac-dev-01.company.com"
    ],
    "storage_policies": {
        "default": "enterprise_standard",
        "secure_hosts": "high_security_financial",
        "creative_hosts": "creative_workflow"
    },
    "monitoring_schedule": {
        "health_checks": "daily",
        "performance_monitoring": "weekly",
        "security_audits": "monthly"
    }
}
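Before a `fleet_deploy` run, it can help to sanity-check this file. A minimal validation sketch using `jq` (which the management script above already depends on); the checked fields mirror the example schema here and should be adapted to your own configuration:

```bash
#!/bin/bash

# Validate a fleet configuration file before deployment.
# The checked fields (.hosts, .storage_policies.default) mirror the
# example schema above -- adjust them for your own configuration.
validate_fleet_config() {
    local config="$1"

    # Must exist and parse as JSON
    jq empty "$config" 2>/dev/null || { echo "Invalid JSON: $config"; return 1; }

    # At least one host must be defined
    local host_count
    host_count=$(jq '.hosts | length' "$config" 2>/dev/null)
    if [[ "$host_count" -eq 0 ]]; then
        echo "No hosts defined in $config"
        return 1
    fi

    # A default storage policy must be present
    jq -e '.storage_policies.default' "$config" >/dev/null || {
        echo "No default storage policy in $config"
        return 1
    }

    echo "Configuration OK: $host_count host(s)"
}

# Example: validate_fleet_config fleet_config.json
```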

Volume List Example for Bulk Operations

# Volume List for Bulk Operations
# Format: identifier|name|filesystem
disk1s1|Macintosh HD|APFS
disk2s1|Data Volume|APFS
disk3s1|Backup Drive|APFS
disk4s1|External Storage|ExFAT
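A bulk-operations script can consume this pipe-delimited format with a plain read loop. A minimal parsing sketch (field names taken from the format comment above; the processing step is a placeholder echo):

```bash
#!/bin/bash

# Parse a pipe-delimited volume list (identifier|name|filesystem),
# skipping comment lines and blank lines.
parse_volume_list() {
    local list_file="$1"
    while IFS='|' read -r identifier name filesystem; do
        # Skip comments and empty lines
        [[ -z "$identifier" || "$identifier" == \#* ]] && continue
        # Placeholder for the real per-volume operation
        echo "Would process $identifier ($name, $filesystem)"
    done < "$list_file"
}

# Example: parse_volume_list volumes.txt
```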

Security Considerations

Storage Security

  • Encryption Enforcement - Ensure all sensitive volumes are encrypted with FileVault
  • Secure Erasure - Multiple security levels for proper data destruction
  • Access Controls - Monitor and control access to storage resources
  • Integrity Monitoring - Continuous verification of storage integrity
  • Threat Detection - Scan for unauthorized access and suspicious activities
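Encryption enforcement can be spot-checked from the shell. A sketch that classifies `fdesetup status` output; the exact status strings ("FileVault is On." / "FileVault is Off.") match current macOS releases but should be treated as an assumption:

```bash
#!/bin/bash

# Classify FileVault state from an `fdesetup status` message.
# Kept as a function so the parsing can be exercised without a Mac.
check_filevault() {
    local status_line="$1"
    case "$status_line" in
        *"FileVault is On"*)  echo "compliant" ;;
        *"FileVault is Off"*) echo "non-compliant" ;;
        *)                    echo "unknown" ;;
    esac
}

# On a real device (macOS only):
# check_filevault "$(fdesetup status)"
```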

Compliance Framework

  • Data Retention - Automated enforcement of data retention policies
  • Audit Trails - Comprehensive logging of all storage operations
  • Encryption Standards - Compliance with AES-256 and other encryption requirements
  • Backup Verification - Automated verification of backup integrity
  • Disaster Recovery - Storage-focused disaster recovery procedures

Performance Optimization

Storage Performance

  • SSD Optimization - TRIM enablement and wear leveling optimization
  • APFS Optimization - Container defragmentation and snapshot management
  • Cache Management - Intelligent caching and memory optimization
  • I/O Monitoring - Real-time monitoring of storage I/O performance
  • Bottleneck Detection - Automated detection and resolution of performance issues

Troubleshooting Guide

Common Issues

Disk Verification Failures

  • Run First Aid from Disk Utility
  • Boot from Recovery Mode for system volume repairs
  • Check for hardware issues with Apple Diagnostics

Performance Degradation

  • Monitor SMART status for drive health
  • Check for fragmentation on older filesystems
  • Verify adequate free space (maintain 10-15% free)
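The 10-15% guideline can be turned into an automated check. A minimal sketch parsing POSIX `df -P` output; the 90% default threshold is an assumption to tune per fleet:

```bash
#!/bin/bash

# Warn when a volume's used percentage exceeds a threshold.
# Uses `df -P` so the column layout is the stable POSIX format.
check_free_space() {
    local mount_point="$1"
    local max_used="${2:-90}"   # assumed default: warn above 90% used
    local used_pct
    used_pct=$(df -P "$mount_point" | awk 'NR==2 {gsub("%",""); print $5}')
    if [[ "$used_pct" -ge "$max_used" ]]; then
        echo "WARNING: $mount_point is ${used_pct}% full"
        return 1
    fi
    echo "OK: $mount_point is ${used_pct}% full"
}

# Example: check_free_space / 85
```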

Mount/Unmount Issues

  • Check for processes using the volume: lsof /Volumes/VolumeName
  • Force unmount if necessary: diskutil unmount force /dev/diskXsY
  • Verify filesystem integrity before remounting
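The three bullets above can be folded into one guarded helper. A sketch (macOS-only, since it relies on lsof and diskutil; /Volumes/VolumeName is a placeholder):

```bash
#!/bin/bash

# Safely unmount a volume: report blocking processes first,
# try a normal unmount, then fall back to a forced one.
safe_unmount() {
    local volume_path="$1"   # e.g. /Volumes/VolumeName (placeholder)

    # List processes holding files open on the volume
    if lsof "$volume_path" 2>/dev/null | grep -q .; then
        echo "Volume busy; open files held by:"
        lsof "$volume_path" | awk 'NR>1 {print "  " $1 " (PID " $2 ")"}' | sort -u
    fi

    # Try a normal unmount first; force only as a last resort
    diskutil unmount "$volume_path" 2>/dev/null \
        || diskutil unmount force "$volume_path"
}

# Example: safe_unmount /Volumes/VolumeName
```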

Diagnostic Commands

# Check disk health
diskutil verifyVolume disk1s1

# Monitor disk I/O
iostat -d 1

# Check filesystem usage
df -h

# View SMART status
smartctl -a /dev/disk0
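For a quick triage pass, the commands above can be wrapped into a single summary function. A sketch (assumes macOS plus smartmontools for smartctl, e.g. installed via Homebrew; the disk and volume identifiers are placeholders):

```bash
#!/bin/bash

# One-shot health summary combining the diagnostics above.
disk_health_summary() {
    local disk="${1:-disk0}"        # physical disk (placeholder default)
    local volume="${2:-disk1s1}"    # volume identifier (placeholder default)

    echo "=== Filesystem usage ==="
    df -h /

    echo "=== Volume verification ($volume) ==="
    diskutil verifyVolume "$volume" || echo "Verification reported problems"

    echo "=== SMART status (/dev/$disk) ==="
    smartctl -H "/dev/$disk" 2>/dev/null || echo "smartctl not available"
}

# Example: disk_health_summary disk0 disk1s1
```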

Important Notes

  • Data Safety - Always backup critical data before performing disk operations
  • Administrative Privileges - Most disk operations require sudo/administrator access
  • Testing - Test all scripts on non-production systems before fleet deployment
  • Recovery Planning - Maintain bootable recovery drives and backup strategies
  • Monitoring - Implement continuous monitoring for early problem detection
  • Documentation - Keep detailed records of all storage configurations and changes


Setting Up a GitHub Actions Runner on a Mac mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners give you greater control over and customization of your CI/CD setup. This tutorial walks you through setting up, configuring, and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (sign up with Macfleet)
  • A GitHub repository with administrator rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the user's password
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner
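One caveat in the block above: UniqueID "1001" may already be taken on machines with several local accounts. A sketch that derives the next free ID instead; the parsing is plain awk over `dscl . -list /Users UniqueID` output (assumed to be "name id" pairs, one per line):

```bash
#!/bin/bash

# Compute the next free UniqueID from `dscl . -list /Users UniqueID`
# output ("name  id" per line), only considering IDs >= 501
# (the range macOS uses for regular local accounts).
next_unique_id() {
    awk '$2 >= 501 && $2 > max { max = $2 } END { print (max ? max : 500) + 1 }'
}

# On a real Mac:
# dscl . -list /Users UniqueID | next_unique_id
```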

Switch to the new user account:

su gh-runner

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are on Apple Silicon):

# Install Git if it is not already installed
brew install git

# Install Rosetta 2 on Apple Silicon Macs
softwareupdate --install-rosetta

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners

GitHub Actions Runner

  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the commands provided there to download and configure the runner

GitHub Actions Runner

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  1. Run the run.sh script in your runner directory to complete the setup.
  2. Verify in the terminal that the runner is active and waiting for jobs, and check the runner's assignment and idle status in your GitHub repository settings.

GitHub Actions Runner

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, configure the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Example workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated against your repository and tagged with self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Monitor runner logs regularly for problems
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load distribution

Troubleshooting

Common problems and solutions:

  1. Runner does not connect:

    • Check the network connection
    • Verify that the GitHub token is valid
    • Ensure appropriate permissions
  2. Build errors:

    • Check the Xcode installation
    • Verify required dependencies
    • Review the workflow logs
  3. Permission problems:

    • Check user permissions
    • Review the sudoers configuration
    • Check filesystem permissions

Conclusion

You have now configured a self-hosted GitHub Actions runner on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it up to date with the latest security patches and software versions.

Native App

Macfleet native App

Macfleet Installation Guide

Macfleet is a powerful fleet management solution built specifically for cloud-hosted Mac mini environments. As a Mac mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to ensure comprehensive visibility into your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet app into the Applications folder
  • Eject the .dmg file
  • Open System Preferences > Security & Privacy
    • Privacy tab > Accessibility
    • Enable Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow the app in Windows Defender when prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb package (Ubuntu/Debian) or .rpm (CentOS/RHEL) here
  • Install it with your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions when prompted
  • Add the user to the appropriate groups if required
  • Launch Macfleet from the application menu
  • The application starts tracking automatically

Note: After installation on all systems, sign in with your Macfleet credentials to sync data with your dashboard.