Tutorial

New updates and improvements to Macfleet.

Important Notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may arise from using, modifying, or implementing these examples. Always review and test code in a safe environment before deploying it on production systems.

Directory Listing and File Management on macOS

Master file and directory listing operations on your MacFleet devices using powerful command-line tools. This tutorial covers basic file listing, hidden file discovery, detailed information display, and advanced sorting techniques for effective file system management.

Understanding macOS Directory Operations

macOS provides robust command-line tools for directory management:

  • ls - List directory contents with various options
  • find - Search for files and directories with advanced criteria
  • tree - Display directory structure in tree format
  • stat - Display detailed file/directory information
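
The ls and find commands are demonstrated throughout this tutorial; tree and stat deserve a quick sketch of their own. Note that tree is not preinstalled on macOS and is typically added via Homebrew (brew install tree); the stat format flags differ between the BSD tools that ship with macOS and GNU coreutils:

```shell
#!/bin/bash

# Quick look at tree and stat (a minimal sketch; defaults to the current
# directory, and tree must be installed first: brew install tree)
TARGET="${1:-.}"

# stat: detailed metadata for a single file or directory.
# BSD stat on macOS uses -f format strings; GNU stat on Linux uses -c.
if [ "$(uname)" = "Darwin" ]; then
    stat -f "Name: %N  Size: %z bytes  Modified: %Sm" "$TARGET"
else
    stat -c "Name: %n  Size: %s bytes  Modified: %y" "$TARGET"
fi

# tree: directory structure two levels deep, if the command is available
command -v tree >/dev/null 2>&1 && tree -L 2 "$TARGET"
```

Running it without arguments inspects the current directory; pass any path as the first argument.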

Basic File Listing Operations

List All Files in Directory

#!/bin/bash

# Basic directory listing
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files in $FOLDER_PATH:"
ls "$FOLDER_PATH"

echo "Directory listing completed successfully"

Recursive Directory Listing

#!/bin/bash

# List all files recursively (including subdirectories)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Recursive listing of $FOLDER_PATH:"
ls -R "$FOLDER_PATH"

echo "Recursive directory listing completed"

Hidden Files and Directories

View Hidden Content

#!/bin/bash

# Display all files including hidden ones
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "All files (including hidden) in $FOLDER_PATH:"
ls -a "$FOLDER_PATH"

echo "Hidden files displayed successfully"

Hidden Files Analysis

#!/bin/bash

# Comprehensive hidden file analysis
analyze_hidden_files() {
    local folder_path="$1"
    
    echo "=== Hidden Files Analysis for: $folder_path ==="
    
    # Count hidden files (ls -A lists hidden entries but omits . and ..)
    local hidden_count
    hidden_count=$(ls -A "$folder_path" | grep -c "^\.")
    echo "Hidden files found: $hidden_count"
    
    # List hidden files with details (match the filename field, not the
    # permission string that starts each ls -la line)
    echo -e "\nHidden files:"
    ls -lA "$folder_path" | awk '$NF ~ /^\./'
    
    # Check for common hidden file types
    echo -e "\nCommon hidden file types:"
    ls -A "$folder_path" | grep -E "^\.(DS_Store|Spotlight-V100|Trashes|fseventsd)" || echo "No common system hidden files found"
}

# Usage
analyze_hidden_files "/Users/QA/Desktop/Wallpapers"

Detailed File Information

Long Format Listing

#!/bin/bash

# Display detailed file information
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Detailed file information for $FOLDER_PATH:"
ls -l "$FOLDER_PATH"

echo "Detailed listing completed"

Human-Readable File Sizes

#!/bin/bash

# Display file sizes in human-readable format
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files with human-readable sizes in $FOLDER_PATH:"
ls -lh "$FOLDER_PATH"

echo "Human-readable listing completed"

Comprehensive File Details

#!/bin/bash

# Advanced file information display
show_file_details() {
    local folder_path="$1"
    
    echo "=== Comprehensive File Details: $folder_path ==="
    
    # Basic listing with permissions, sizes, dates
    echo "Detailed file listing:"
    ls -lah "$folder_path"
    
    echo -e "\nDirectory summary:"
    echo "Total items: $(ls -1 "$folder_path" | wc -l)"
    echo "Total size: $(du -sh "$folder_path" | cut -f1)"
    echo "Disk usage: $(du -sk "$folder_path" | cut -f1) KB"
    
    # File type breakdown
    echo -e "\nFile type analysis:"
    find "$folder_path" -maxdepth 1 -type f -exec file {} \; | cut -d: -f2 | sort | uniq -c | sort -nr
}

# Usage
show_file_details "/Users/QA/Desktop/Wallpapers"

File Sorting Options

Sort by Modification Time

#!/bin/bash

# Sort files by last modification time (newest first)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files sorted by modification time (newest first):"
ls -t "$FOLDER_PATH"

echo "Time-based sorting completed"

Sort by File Size

#!/bin/bash

# Sort files by size (largest first)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files sorted by size (largest first):"
ls -lhS "$FOLDER_PATH"

echo "Size-based sorting completed"

Advanced Sorting Options

#!/bin/bash

# Multiple sorting criteria
advanced_sorting() {
    local folder_path="$1"
    
    echo "=== Advanced File Sorting: $folder_path ==="
    
    echo "1. By modification time (newest first):"
    ls -lt "$folder_path" | head -10
    
    echo -e "\n2. By modification time (oldest first):"
    ls -ltr "$folder_path" | head -10
    
    echo -e "\n3. By size (largest first):"
    ls -lhS "$folder_path" | head -10
    
    echo -e "\n4. By size (smallest first):"
    ls -lhSr "$folder_path" | head -10
    
    echo -e "\n5. By name (alphabetical):"
    ls -l "$folder_path" | sort -k9
    
    echo -e "\n6. By extension:"
    ls -1 "$folder_path" | grep '\.' | sort -t. -k2
}

# Usage
advanced_sorting "/Users/QA/Desktop/Wallpapers"

Enterprise Directory Management System

#!/usr/bin/env bash

# MacFleet Directory Management Tool
# Comprehensive file and directory analysis for fleet devices
# Note: requires bash 4+ for associative arrays (declare -A); macOS ships
# bash 3.2 at /bin/bash, so install a newer bash (e.g. via Homebrew) first.

# Configuration
SCRIPT_VERSION="1.0.0"
LOG_FILE="/var/log/macfleet_directory.log"
REPORT_DIR="/etc/macfleet/reports/directory"
CONFIG_DIR="/etc/macfleet/directory"

# Create directories if they don't exist
mkdir -p "$REPORT_DIR" "$CONFIG_DIR"

# Directory categories for analysis
declare -A DIRECTORY_CATEGORIES=(
    ["system"]="/System,/Library,/usr,/bin,/sbin"
    ["user"]="/Users,/home"
    ["applications"]="/Applications,/Applications/Utilities"
    ["documents"]="/Documents,/Desktop,/Downloads"
    ["media"]="/Movies,/Music,/Pictures"
    ["temporary"]="/tmp,/var/tmp,/var/cache"
    ["logs"]="/var/log,/Library/Logs"
    ["configuration"]="/etc,/private/etc,/Library/Preferences"
    ["development"]="/usr/local,/opt,/Developer"
    ["network"]="/Network,/Volumes"
)

# Directory policies for different scanning levels
declare -A DIRECTORY_POLICIES=(
    ["quick_scan"]="basic_listing,file_count,size_summary"
    ["standard_scan"]="detailed_listing,hidden_files,type_analysis,permission_check"
    ["comprehensive_scan"]="full_recursive,security_scan,duplicate_detection,metadata_extraction"
    ["security_audit"]="permission_audit,ownership_check,sensitive_file_scan,access_log"
    ["performance_scan"]="large_file_detection,fragmentation_check,cache_analysis"
    ["compliance_scan"]="policy_validation,retention_check,classification_audit"
)

# Logging function
log_action() {
    local message="$1"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] $message" | tee -a "$LOG_FILE"
}

# Directory analysis engine
analyze_directory() {
    local target_path="$1"
    local scan_level="${2:-standard_scan}"
    local category="${3:-general}"
    
    log_action "Starting directory analysis: $target_path (Level: $scan_level, Category: $category)"
    
    if [[ ! -d "$target_path" ]]; then
        log_action "ERROR: Directory not found: $target_path"
        return 1
    fi
    
    local report_file="$REPORT_DIR/directory_analysis_$(echo "$target_path" | tr '/' '_')_$(date +%Y%m%d_%H%M%S).json"
    
    # Initialize report
    cat > "$report_file" << EOF
{
    "analysis_info": {
        "target_path": "$target_path",
        "scan_level": "$scan_level",
        "category": "$category",
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "script_version": "$SCRIPT_VERSION"
    },
    "directory_metrics": {},
    "file_analysis": {},
    "security_findings": {},
    "recommendations": []
}
EOF
    
    # Basic directory metrics
    local total_items=$(find "$target_path" -maxdepth 1 2>/dev/null | wc -l)
    local total_files=$(find "$target_path" -maxdepth 1 -type f 2>/dev/null | wc -l)
    local total_dirs=$(find "$target_path" -maxdepth 1 -type d 2>/dev/null | wc -l)
    local total_size=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    
    # Update report with metrics
    jq --argjson total_items "$((total_items - 1))" \
       --argjson total_files "$total_files" \
       --argjson total_dirs "$((total_dirs - 1))" \
       --argjson total_size_kb "$total_size" \
       '.directory_metrics = {
           "total_items": $total_items,
           "total_files": $total_files,
           "total_directories": $total_dirs,
           "total_size_kb": $total_size_kb,
           "average_file_size": (if $total_files > 0 then ($total_size_kb / $total_files) else 0 end)
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Perform analysis based on scan level
    case "$scan_level" in
        "quick_scan")
            perform_quick_scan "$target_path" "$report_file"
            ;;
        "standard_scan")
            perform_standard_scan "$target_path" "$report_file"
            ;;
        "comprehensive_scan")
            perform_comprehensive_scan "$target_path" "$report_file"
            ;;
        "security_audit")
            perform_security_audit "$target_path" "$report_file"
            ;;
        "performance_scan")
            perform_performance_scan "$target_path" "$report_file"
            ;;
        "compliance_scan")
            perform_compliance_scan "$target_path" "$report_file"
            ;;
    esac
    
    log_action "Directory analysis completed: $report_file"
    echo "$report_file"
}

# Quick scan implementation
perform_quick_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing quick scan of $target_path..."
    
    # Basic file listing
    local file_listing=$(ls -la "$target_path" 2>/dev/null | tail -n +2)
    
    # File type distribution
    local file_types=$(find "$target_path" -maxdepth 1 -type f -exec file {} \; 2>/dev/null | cut -d: -f2 | sort | uniq -c | sort -nr)
    
    # Update report
    jq --arg file_listing "$file_listing" \
       --arg file_types "$file_types" \
       '.file_analysis.quick_scan = {
           "file_listing": $file_listing,
           "file_type_distribution": $file_types
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Standard scan implementation
perform_standard_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing standard scan of $target_path..."
    
    # Detailed file analysis
    local detailed_listing=$(ls -lah "$target_path" 2>/dev/null)
    local hidden_files=$(ls -A "$target_path" 2>/dev/null | grep -c "^\.")
    # BSD find has no -executable; -perm +111 matches any execute bit on macOS
    local executable_files=$(find "$target_path" -maxdepth 1 -type f -perm +111 2>/dev/null | wc -l)
    
    # File size analysis
    local large_files=$(find "$target_path" -maxdepth 1 -type f -size +10M 2>/dev/null)
    local small_files=$(find "$target_path" -maxdepth 1 -type f -size -1k 2>/dev/null | wc -l)
    
    # File age analysis
    local recent_files=$(find "$target_path" -maxdepth 1 -type f -mtime -7 2>/dev/null | wc -l)
    local old_files=$(find "$target_path" -maxdepth 1 -type f -mtime +365 2>/dev/null | wc -l)
    
    # Update report
    jq --arg detailed_listing "$detailed_listing" \
       --argjson hidden_files "$hidden_files" \
       --argjson executable_files "$executable_files" \
       --arg large_files "$large_files" \
       --argjson small_files "$small_files" \
       --argjson recent_files "$recent_files" \
       --argjson old_files "$old_files" \
       '.file_analysis.standard_scan = {
           "detailed_listing": $detailed_listing,
           "hidden_files_count": $hidden_files,
           "executable_files_count": $executable_files,
           "large_files": $large_files,
           "small_files_count": $small_files,
           "recent_files_count": $recent_files,
           "old_files_count": $old_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Comprehensive scan implementation
perform_comprehensive_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing comprehensive scan of $target_path..."
    
    # Recursive analysis
    local total_recursive_items=$(find "$target_path" 2>/dev/null | wc -l)
    local max_depth=$(find "$target_path" -type d 2>/dev/null | awk -F/ '{print NF}' | sort -n | tail -1)
    
    # File extension analysis
    local extensions=$(find "$target_path" -type f 2>/dev/null | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr)
    
    # Duplicate file detection (md5 -r prints "hash filename"; print any
    # line whose hash has already been seen)
    local duplicates=$(find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | awk 'seen[$1]++')
    
    # Symbolic link analysis
    local symlinks=$(find "$target_path" -type l 2>/dev/null)
    
    # Update report
    jq --argjson total_recursive "$((total_recursive_items - 1))" \
       --argjson max_depth "$((max_depth - $(echo "$target_path" | tr -cd '/' | wc -c) - 1))" \
       --arg extensions "$extensions" \
       --arg duplicates "$duplicates" \
       --arg symlinks "$symlinks" \
       '.file_analysis.comprehensive_scan = {
           "total_recursive_items": $total_recursive,
           "maximum_depth": $max_depth,
           "file_extensions": $extensions,
           "duplicate_files": $duplicates,
           "symbolic_links": $symlinks
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Security audit implementation
perform_security_audit() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing security audit of $target_path..."
    
    # Permission analysis
    local world_writable=$(find "$target_path" -type f -perm -002 2>/dev/null)
    local world_readable=$(find "$target_path" -type f -perm -004 2>/dev/null | wc -l)
    local setuid_files=$(find "$target_path" -type f -perm -4000 2>/dev/null)
    local setgid_files=$(find "$target_path" -type f -perm -2000 2>/dev/null)
    
    # Ownership analysis
    local root_owned=$(find "$target_path" -user root 2>/dev/null | wc -l)
    local no_owner=$(find "$target_path" -nouser 2>/dev/null)
    local no_group=$(find "$target_path" -nogroup 2>/dev/null)
    
    # Sensitive file patterns (grep -E for the alternation; -print0/-0
    # keeps file names with spaces intact)
    local sensitive_patterns="password|secret|key|token|credential|private"
    local sensitive_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -lEi "$sensitive_patterns" 2>/dev/null)
    
    # Update report
    jq --arg world_writable "$world_writable" \
       --argjson world_readable "$world_readable" \
       --arg setuid_files "$setuid_files" \
       --arg setgid_files "$setgid_files" \
       --argjson root_owned "$root_owned" \
       --arg no_owner "$no_owner" \
       --arg no_group "$no_group" \
       --arg sensitive_files "$sensitive_files" \
       '.security_findings = {
           "world_writable_files": $world_writable,
           "world_readable_count": $world_readable,
           "setuid_files": $setuid_files,
           "setgid_files": $setgid_files,
           "root_owned_count": $root_owned,
           "orphaned_files": $no_owner,
           "orphaned_groups": $no_group,
           "sensitive_files": $sensitive_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Performance scan implementation
perform_performance_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing performance scan of $target_path..."
    
    # Large file analysis
    local files_over_100mb=$(find "$target_path" -type f -size +100M 2>/dev/null)
    local files_over_1gb=$(find "$target_path" -type f -size +1G 2>/dev/null | wc -l)
    
    # Cache and temporary file analysis
    local cache_files=$(find "$target_path" -name "*cache*" -o -name "*.tmp" -o -name "*.temp" 2>/dev/null)
    local log_files=$(find "$target_path" -name "*.log" 2>/dev/null)
    
    # File access patterns
    local recently_accessed=$(find "$target_path" -type f -atime -1 2>/dev/null | wc -l)
    local never_accessed=$(find "$target_path" -type f -atime +365 2>/dev/null | wc -l)
    
    # Update report
    jq --arg files_over_100mb "$files_over_100mb" \
       --argjson files_over_1gb "$files_over_1gb" \
       --arg cache_files "$cache_files" \
       --arg log_files "$log_files" \
       --argjson recently_accessed "$recently_accessed" \
       --argjson never_accessed "$never_accessed" \
       '.file_analysis.performance_scan = {
           "large_files_100mb": $files_over_100mb,
           "large_files_1gb_count": $files_over_1gb,
           "cache_temp_files": $cache_files,
           "log_files": $log_files,
           "recently_accessed_count": $recently_accessed,
           "stale_files_count": $never_accessed
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Compliance scan implementation
perform_compliance_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing compliance scan of $target_path..."
    
    # Data classification patterns
    local pii_patterns="ssn|social.security|credit.card|phone|email|address"
    local financial_patterns="account.number|routing|swift|iban|tax.id"
    local health_patterns="medical|patient|diagnosis|prescription|hipaa"
    
    local pii_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -lEi "$pii_patterns" 2>/dev/null)
    local financial_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -lEi "$financial_patterns" 2>/dev/null)
    local health_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -lEi "$health_patterns" 2>/dev/null)
    
    # Retention analysis
    local files_over_7_years=$(find "$target_path" -type f -mtime +2555 2>/dev/null | wc -l)
    local files_over_3_years=$(find "$target_path" -type f -mtime +1095 2>/dev/null | wc -l)
    
    # Update report
    jq --arg pii_files "$pii_files" \
       --arg financial_files "$financial_files" \
       --arg health_files "$health_files" \
       --argjson files_over_7_years "$files_over_7_years" \
       --argjson files_over_3_years "$files_over_3_years" \
       '.file_analysis.compliance_scan = {
           "pii_files": $pii_files,
           "financial_files": $financial_files,
           "health_files": $health_files,
           "retention_7_years": $files_over_7_years,
           "retention_3_years": $files_over_3_years
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Fleet directory management
manage_fleet_directories() {
    local action="$1"
    local target_pattern="$2"
    local scan_level="${3:-standard_scan}"
    
    log_action "Fleet directory management: $action on $target_pattern"
    
    case "$action" in
        "scan_all")
            for category in "${!DIRECTORY_CATEGORIES[@]}"; do
                IFS=',' read -ra PATHS <<< "${DIRECTORY_CATEGORIES[$category]}"
                for path in "${PATHS[@]}"; do
                    if [[ -d "$path" ]]; then
                        analyze_directory "$path" "$scan_level" "$category"
                    fi
                done
            done
            ;;
        "scan_category")
            if [[ -n "${DIRECTORY_CATEGORIES[$target_pattern]}" ]]; then
                IFS=',' read -ra PATHS <<< "${DIRECTORY_CATEGORIES[$target_pattern]}"
                for path in "${PATHS[@]}"; do
                    if [[ -d "$path" ]]; then
                        analyze_directory "$path" "$scan_level" "$target_pattern"
                    fi
                done
            fi
            ;;
        "scan_path")
            if [[ -d "$target_pattern" ]]; then
                analyze_directory "$target_pattern" "$scan_level" "custom"
            fi
            ;;
        "generate_report")
            generate_fleet_report
            ;;
    esac
}

# Generate comprehensive fleet report
generate_fleet_report() {
    local fleet_report="$REPORT_DIR/fleet_directory_report_$(date +%Y%m%d_%H%M%S).json"
    
    echo "Generating fleet directory report..."
    
    # Combine all individual reports
    local reports=($(find "$REPORT_DIR" -name "directory_analysis_*.json" -mtime -1))
    
    cat > "$fleet_report" << EOF
{
    "fleet_report": {
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "total_reports": ${#reports[@]},
        "reports": []
    }
}
EOF
    
    for report in "${reports[@]}"; do
        if [[ -f "$report" ]]; then
            jq --slurpfile new_report "$report" '.fleet_report.reports += $new_report' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
        fi
    done
    
    log_action "Fleet report generated: $fleet_report"
    echo "$fleet_report"
}

# Main execution function
main() {
    local action="${1:-analyze}"
    local target="${2:-/Users}"
    local scan_level="${3:-standard_scan}"
    
    log_action "=== MacFleet Directory Management Started ==="
    log_action "Action: $action, Target: $target, Scan Level: $scan_level"
    
    case "$action" in
        "analyze")
            analyze_directory "$target" "$scan_level"
            ;;
        "fleet-scan")
            manage_fleet_directories "scan_all" "" "$scan_level"
            ;;
        "category-scan")
            manage_fleet_directories "scan_category" "$target" "$scan_level"
            ;;
        "fleet-report")
            manage_fleet_directories "generate_report"
            ;;
        "help")
            echo "Usage: $0 [action] [target] [scan_level]"
            echo "Actions: analyze, fleet-scan, category-scan, fleet-report, help"
            echo "Scan Levels: quick_scan, standard_scan, comprehensive_scan, security_audit, performance_scan, compliance_scan"
            echo "Categories: ${!DIRECTORY_CATEGORIES[*]}"
            ;;
        *)
            log_action "ERROR: Unknown action: $action"
            exit 1
            ;;
    esac
    
    log_action "=== Directory management completed ==="
}

# Execute main function
main "$@"

Directory Listing Best Practices

Handle Spaces in File Names

# Safe handling of file names with spaces
while IFS= read -r -d '' file; do
    echo "Processing: $file"
    # Your processing logic here
done < <(find "/path/with spaces" -type f -print0)

Filter by File Types

# List only image files
ls -la /Users/QA/Desktop/Wallpapers/*.{jpg,jpeg,png,gif,bmp} 2>/dev/null

# List only text files
find /path/to/directory -name "*.txt" -o -name "*.md" -o -name "*.doc"

Performance Considerations

# For large directories, use find with limited depth
find /large/directory -maxdepth 2 -type f | head -100

# Use xargs for processing many files
find /directory -name "*.log" | xargs wc -l

Common Directory Operations

Directory Size Analysis

#!/bin/bash

# Analyze directory sizes
analyze_directory_sizes() {
    local base_path="$1"
    
    echo "Directory size analysis for: $base_path"
    echo "=================================="
    
    # Top 10 largest subdirectories
    du -h "$base_path"/* 2>/dev/null | sort -hr | head -10
    
    echo -e "\nTotal directory size:"
    du -sh "$base_path"
}

analyze_directory_sizes "/Users"

File Count by Type

#!/bin/bash

# Count files by extension
count_files_by_type() {
    local directory="$1"
    
    echo "File type distribution in: $directory"
    find "$directory" -type f | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr
}

count_files_by_type "/Users/QA/Desktop/Wallpapers"

Important Notes

  • Use quotes around paths containing spaces: ls "/path with spaces"
  • Escape special characters with backslash when not using quotes
  • Check permissions before accessing system directories
  • Test scripts on sample directories before production use
  • Monitor performance when analyzing large directory structures
  • Consider security when listing sensitive directories in logs
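
The quoting and escaping rules above can be sketched with a throwaway directory (the /tmp path below is only an example):

```shell
#!/bin/bash

# Demonstrate quoting vs. escaping for paths that contain spaces
# ("/tmp/macfleet demo" is a throwaway example path)
DEMO_DIR="/tmp/macfleet demo"
mkdir -p "$DEMO_DIR"
touch "$DEMO_DIR/sample file.txt"

# Quoted: the whole path reaches ls as a single argument
ls "$DEMO_DIR"

# Escaped: a backslash protects the space instead of quotes
ls /tmp/macfleet\ demo

# Unquoted would break: 'ls /tmp/macfleet demo' passes two arguments
rm -rf "$DEMO_DIR"
```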


Setting Up a GitHub Actions Runner on a Mac Mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners give you greater control and customization for your CI/CD setup. This tutorial walks you through setting up, configuring, and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (sign up with Macfleet)
  • A GitHub repository with administrator rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"  # pick a UID not already in use
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the password for the user
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner

Switch to the new user account:

su gh-runner
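
Before relying on the new account, a quick sanity check (not part of the official setup) confirms the record was created:

```shell
#!/bin/bash

# Verify the runner account exists and is set up as expected
RUNNER_USER="gh-runner"

if id "$RUNNER_USER" >/dev/null 2>&1; then
    # id shows the UID, primary group, and group memberships
    id "$RUNNER_USER"
    # On macOS, dscl can confirm the Directory Services record
    dscl . -read "/Users/$RUNNER_USER" UniqueID NFSHomeDirectory
else
    echo "Account $RUNNER_USER not found - review the dscl commands above" >&2
fi
```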

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are using Apple Silicon):

# Install Git if it is not already installed
brew install git

# Install Rosetta 2 on Apple Silicon Macs
softwareupdate --install-rosetta
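
A quick way to confirm both installs succeeded; checking for the oahd daemon is a common heuristic for Rosetta 2, not an official API:

```shell
#!/bin/bash

# Confirm Git is on the PATH
git --version

# On Apple Silicon, a running 'oahd' daemon indicates Rosetta 2 is installed
if [ "$(uname -m)" = "arm64" ]; then
    if /usr/bin/pgrep -q oahd; then
        echo "Rosetta 2 is available"
    else
        echo "Rosetta 2 not detected - rerun softwareupdate --install-rosetta"
    fi
fi
```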

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners
  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the provided commands to download and configure the runner

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  6. Run the run.sh script in your runner directory to complete the setup.
  7. Verify in the terminal that the runner is active and waiting for jobs, and check the repository's runner settings on GitHub for the runner assignment and idle status.

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, configure the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL
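
After saving, you can verify the entry from the gh-runner session; sudo -n fails immediately instead of prompting when a password would still be required:

```shell
# Check passwordless sudo without risking an interactive prompt
if sudo -n true 2>/dev/null; then
    echo "passwordless sudo is configured"
else
    echo "sudo still requires a password - re-check the sudoers entry"
fi
```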

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Example workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated against your repository and labeled self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Monitor runner logs regularly for issues
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load distribution

Troubleshooting

Common issues and solutions:

  1. Runner does not connect:

    • Check the network connection
    • Verify that the GitHub token is valid
    • Ensure appropriate permissions
  2. Build failures:

    • Check the Xcode installation
    • Verify required dependencies
    • Review the workflow logs
  3. Permission problems:

    • Check user permissions
    • Review the sudoers configuration
    • Check file system permissions

Conclusion

You have now configured a self-hosted GitHub Actions runner on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it up to date with the latest security patches and software versions.

Native App

Macfleet native app

Macfleet Installation Guide

Macfleet is a powerful fleet management solution built specifically for cloud-hosted Mac mini environments. As a Mac mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to ensure comprehensive visibility into your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet app into the Applications folder
  • Eject the .dmg file
  • Open System Settings > Security & Privacy
    • Privacy tab > Accessibility
    • Enable Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically
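
For fleet rollouts, the drag-and-drop steps above can also be scripted. This is a sketch: the download location and the mounted volume name "Macfleet" are assumptions for illustration, not documented values:

```shell
#!/bin/bash

# Scripted version of the .dmg install steps (macOS only; file name and
# volume name "Macfleet" are assumed for illustration)
DMG="$HOME/Downloads/Macfleet.dmg"

if [ "$(uname)" != "Darwin" ]; then
    echo "This sketch applies to macOS only" >&2
    exit 0
fi

hdiutil attach "$DMG" -nobrowse              # mount the disk image
cp -R "/Volumes/Macfleet/Macfleet.app" /Applications/
hdiutil detach "/Volumes/Macfleet"           # eject the image afterwards
```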

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow the app in Windows Defender if prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb package (Ubuntu/Debian) or .rpm (CentOS/RHEL) here
  • Install it with your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions if prompted
  • Add the user to the appropriate groups if required
  • Launch Macfleet from the application menu
  • The application starts tracking automatically

Note: After installation on all systems, sign in with your Macfleet credentials to sync data with your dashboard.