Tutorial

New updates and improvements for Macfleet.

Important notice

The code examples and scripts provided in these tutorials are for educational purposes only. Macfleet is not responsible for any issues, damage, or security vulnerabilities that may arise from using, modifying, or implementing these examples. Always review and test code in a safe environment before running it on production systems.

Directory Listing and File Management on macOS

Master file and directory listing operations on your MacFleet devices using powerful command-line tools. This tutorial covers basic file listing, hidden file discovery, detailed information display, and advanced sorting techniques for effective file system management.

Understanding macOS Directory Operations

macOS provides robust command-line tools for directory management:

  • ls - List directory contents with various options
  • find - Search for files and directories with advanced criteria
  • tree - Display directory structure in tree format
  • stat - Display detailed file/directory information
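
The sections below focus on ls and find, so here is a quick sketch of the other two tools. Note that tree is not preinstalled on macOS (install it with Homebrew), and that stat takes BSD-style flags on macOS but GNU-style flags on Linux; the fallback below covers both:

```shell
#!/bin/bash

# Print basic metadata for one file; macOS ships BSD stat (-f),
# so fall back to GNU stat (-c) on other systems
show_file_info() {
    local f="$1"
    if stat -f "%z" "$f" >/dev/null 2>&1; then
        stat -f "Name: %N  Size: %z bytes  Mode: %Sp" "$f"   # BSD/macOS
    else
        stat -c "Name: %n  Size: %s bytes  Mode: %A" "$f"    # GNU/Linux
    fi
}

# Usage: inspect a scratch file, then display the directory as a tree
demo_dir=$(mktemp -d)
printf 'hello\n' > "$demo_dir/example.txt"
show_file_info "$demo_dir/example.txt"

# tree is not preinstalled on macOS; install it with: brew install tree
command -v tree >/dev/null && tree "$demo_dir"

rm -rf "$demo_dir"
```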

Basic File Listing Operations

List All Files in Directory

#!/bin/bash

# Basic directory listing
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files in $FOLDER_PATH:"
ls "$FOLDER_PATH"

echo "Directory listing completed successfully"

Recursive Directory Listing

#!/bin/bash

# List all files recursively (including subdirectories)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Recursive listing of $FOLDER_PATH:"
ls -R "$FOLDER_PATH"

echo "Recursive directory listing completed"

Hidden Files and Directories

View Hidden Content

#!/bin/bash

# Display all files including hidden ones
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "All files (including hidden) in $FOLDER_PATH:"
ls -a "$FOLDER_PATH"

echo "Hidden files displayed successfully"

Hidden Files Analysis

#!/bin/bash

# Comprehensive hidden file analysis
analyze_hidden_files() {
    local folder_path="$1"
    
    echo "=== Hidden Files Analysis for: $folder_path ==="
    
    # Count hidden files (ls -A lists dotfiles but excludes . and ..)
    local hidden_count
    hidden_count=$(ls -A "$folder_path" | grep -c "^\.")
    echo "Hidden files found: $hidden_count"
    
    # List hidden files with details (ls -la lines start with the permission
    # string, not the file name, so grep "^\." would never match them)
    echo -e "\nHidden files:"
    find "$folder_path" -mindepth 1 -maxdepth 1 -name ".*" -exec ls -ld {} +
    
    # Check for common hidden file types
    echo -e "\nCommon hidden file types:"
    ls -A "$folder_path" | grep -E "^\.(DS_Store|Spotlight-V100|Trashes|fseventsd)$" || echo "No common system hidden files found"
}

# Usage
analyze_hidden_files "/Users/QA/Desktop/Wallpapers"

Detailed File Information

Long Format Listing

#!/bin/bash

# Display detailed file information
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Detailed file information for $FOLDER_PATH:"
ls -l "$FOLDER_PATH"

echo "Detailed listing completed"

Human-Readable File Sizes

#!/bin/bash

# Display file sizes in human-readable format
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files with human-readable sizes in $FOLDER_PATH:"
ls -lh "$FOLDER_PATH"

echo "Human-readable listing completed"

Comprehensive File Details

#!/bin/bash

# Advanced file information display
show_file_details() {
    local folder_path="$1"
    
    echo "=== Comprehensive File Details: $folder_path ==="
    
    # Basic listing with permissions, sizes, dates
    echo "Detailed file listing:"
    ls -lah "$folder_path"
    
    echo -e "\nDirectory summary:"
    echo "Total items: $(ls -1 "$folder_path" | wc -l)"
    echo "Total size: $(du -sh "$folder_path" | cut -f1)"
    echo "Disk usage: $(du -sk "$folder_path" | cut -f1) KB"
    
    # File type breakdown
    echo -e "\nFile type analysis:"
    find "$folder_path" -maxdepth 1 -type f -exec file {} \; | cut -d: -f2 | sort | uniq -c | sort -nr
}

# Usage
show_file_details "/Users/QA/Desktop/Wallpapers"

File Sorting Options

Sort by Modification Time

#!/bin/bash

# Sort files by last modification time (newest first)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files sorted by modification time (newest first):"
ls -t "$FOLDER_PATH"

echo "Time-based sorting completed"

Sort by File Size

#!/bin/bash

# Sort files by size (largest first)
FOLDER_PATH="/Users/QA/Desktop/Wallpapers"

echo "Files sorted by size (largest first):"
ls -lhS "$FOLDER_PATH"

echo "Size-based sorting completed"

Advanced Sorting Options

#!/bin/bash

# Multiple sorting criteria
advanced_sorting() {
    local folder_path="$1"
    
    echo "=== Advanced File Sorting: $folder_path ==="
    
    echo "1. By modification time (newest first):"
    ls -lt "$folder_path" | head -10
    
    echo -e "\n2. By modification time (oldest first):"
    ls -ltr "$folder_path" | head -10
    
    echo -e "\n3. By size (largest first):"
    ls -lhS "$folder_path" | head -10
    
    echo -e "\n4. By size (smallest first):"
    ls -lhSr "$folder_path" | head -10
    
    echo -e "\n5. By name (alphabetical):"
    ls -l "$folder_path" | tail -n +2 | sort -k9   # skip the "total" line
    
    echo -e "\n6. By extension:"
    ls -1 "$folder_path" | grep '\.' | sort -t. -k2
}

# Usage
advanced_sorting "/Users/QA/Desktop/Wallpapers"

Enterprise Directory Management System

#!/bin/bash

# MacFleet Directory Management Tool
# Comprehensive file and directory analysis for fleet devices

# Configuration
SCRIPT_VERSION="1.0.0"
LOG_FILE="/var/log/macfleet_directory.log"
REPORT_DIR="/etc/macfleet/reports/directory"
CONFIG_DIR="/etc/macfleet/directory"

# Create directories if they don't exist
mkdir -p "$REPORT_DIR" "$CONFIG_DIR"

# Directory categories for analysis
declare -A DIRECTORY_CATEGORIES=(
    ["system"]="/System,/Library,/usr,/bin,/sbin"
    ["user"]="/Users,/home"
    ["applications"]="/Applications,/Applications/Utilities"
    ["documents"]="/Documents,/Desktop,/Downloads"
    ["media"]="/Movies,/Music,/Pictures"
    ["temporary"]="/tmp,/var/tmp,/var/cache"
    ["logs"]="/var/log,/Library/Logs"
    ["configuration"]="/etc,/private/etc,/Library/Preferences"
    ["development"]="/usr/local,/opt,/Developer"
    ["network"]="/Network,/Volumes"
)

# Directory policies for different scanning levels
declare -A DIRECTORY_POLICIES=(
    ["quick_scan"]="basic_listing,file_count,size_summary"
    ["standard_scan"]="detailed_listing,hidden_files,type_analysis,permission_check"
    ["comprehensive_scan"]="full_recursive,security_scan,duplicate_detection,metadata_extraction"
    ["security_audit"]="permission_audit,ownership_check,sensitive_file_scan,access_log"
    ["performance_scan"]="large_file_detection,fragmentation_check,cache_analysis"
    ["compliance_scan"]="policy_validation,retention_check,classification_audit"
)

# Logging function
log_action() {
    local message="$1"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] $message" | tee -a "$LOG_FILE"
}

# Directory analysis engine
analyze_directory() {
    local target_path="$1"
    local scan_level="${2:-standard_scan}"
    local category="${3:-general}"
    
    log_action "Starting directory analysis: $target_path (Level: $scan_level, Category: $category)"
    
    if [[ ! -d "$target_path" ]]; then
        log_action "ERROR: Directory not found: $target_path"
        return 1
    fi
    
    local report_file="$REPORT_DIR/directory_analysis_$(echo "$target_path" | tr '/' '_')_$(date +%Y%m%d_%H%M%S).json"
    
    # Initialize report
    cat > "$report_file" << EOF
{
    "analysis_info": {
        "target_path": "$target_path",
        "scan_level": "$scan_level",
        "category": "$category",
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "script_version": "$SCRIPT_VERSION"
    },
    "directory_metrics": {},
    "file_analysis": {},
    "security_findings": {},
    "recommendations": []
}
EOF
    
    # Basic directory metrics
    local total_items=$(find "$target_path" -maxdepth 1 2>/dev/null | wc -l)
    local total_files=$(find "$target_path" -maxdepth 1 -type f 2>/dev/null | wc -l)
    local total_dirs=$(find "$target_path" -maxdepth 1 -type d 2>/dev/null | wc -l)
    local total_size=$(du -sk "$target_path" 2>/dev/null | cut -f1)
    
    # Update report with metrics
    jq --argjson total_items "$((total_items - 1))" \
       --argjson total_files "$total_files" \
       --argjson total_dirs "$((total_dirs - 1))" \
       --argjson total_size_kb "$total_size" \
       '.directory_metrics = {
           "total_items": $total_items,
           "total_files": $total_files,
           "total_directories": $total_dirs,
           "total_size_kb": $total_size_kb,
           "average_file_size": (if $total_files > 0 then ($total_size_kb / $total_files) else 0 end)
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
    
    # Perform analysis based on scan level
    case "$scan_level" in
        "quick_scan")
            perform_quick_scan "$target_path" "$report_file"
            ;;
        "standard_scan")
            perform_standard_scan "$target_path" "$report_file"
            ;;
        "comprehensive_scan")
            perform_comprehensive_scan "$target_path" "$report_file"
            ;;
        "security_audit")
            perform_security_audit "$target_path" "$report_file"
            ;;
        "performance_scan")
            perform_performance_scan "$target_path" "$report_file"
            ;;
        "compliance_scan")
            perform_compliance_scan "$target_path" "$report_file"
            ;;
    esac
    
    log_action "Directory analysis completed: $report_file"
    echo "$report_file"
}

# Quick scan implementation
perform_quick_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing quick scan of $target_path..."
    
    # Basic file listing
    local file_listing=$(ls -la "$target_path" 2>/dev/null | tail -n +2)
    
    # File type distribution
    local file_types=$(find "$target_path" -maxdepth 1 -type f -exec file {} \; 2>/dev/null | cut -d: -f2 | sort | uniq -c | sort -nr)
    
    # Update report
    jq --arg file_listing "$file_listing" \
       --arg file_types "$file_types" \
       '.file_analysis.quick_scan = {
           "file_listing": $file_listing,
           "file_type_distribution": $file_types
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Standard scan implementation
perform_standard_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing standard scan of $target_path..."
    
    # Detailed file analysis (ls -A counts dotfiles without . and ..)
    local detailed_listing=$(ls -lah "$target_path" 2>/dev/null)
    local hidden_files=$(ls -A "$target_path" 2>/dev/null | grep -c "^\.")
    # BSD find on macOS has no -executable; -perm +111 matches any execute bit
    local executable_files=$(find "$target_path" -maxdepth 1 -type f -perm +111 2>/dev/null | wc -l)
    
    # File size analysis
    local large_files=$(find "$target_path" -maxdepth 1 -type f -size +10M 2>/dev/null)
    local small_files=$(find "$target_path" -maxdepth 1 -type f -size -1k 2>/dev/null | wc -l)
    
    # File age analysis
    local recent_files=$(find "$target_path" -maxdepth 1 -type f -mtime -7 2>/dev/null | wc -l)
    local old_files=$(find "$target_path" -maxdepth 1 -type f -mtime +365 2>/dev/null | wc -l)
    
    # Update report
    jq --arg detailed_listing "$detailed_listing" \
       --argjson hidden_files "$hidden_files" \
       --argjson executable_files "$executable_files" \
       --arg large_files "$large_files" \
       --argjson small_files "$small_files" \
       --argjson recent_files "$recent_files" \
       --argjson old_files "$old_files" \
       '.file_analysis.standard_scan = {
           "detailed_listing": $detailed_listing,
           "hidden_files_count": $hidden_files,
           "executable_files_count": $executable_files,
           "large_files": $large_files,
           "small_files_count": $small_files,
           "recent_files_count": $recent_files,
           "old_files_count": $old_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Comprehensive scan implementation
perform_comprehensive_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing comprehensive scan of $target_path..."
    
    # Recursive analysis
    local total_recursive_items=$(find "$target_path" 2>/dev/null | wc -l)
    local max_depth=$(find "$target_path" -type d 2>/dev/null | awk -F/ '{print NF}' | sort -n | tail -1)
    
    # File extension analysis
    local extensions=$(find "$target_path" -type f 2>/dev/null | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr)
    
    # Duplicate file detection (md5 -r prints "hash filename", so uniq -w 32
    # compares the hashes; plain md5 puts the hash at the end of the line)
    local duplicates=$(find "$target_path" -type f -exec md5 -r {} \; 2>/dev/null | sort | uniq -d -w 32)
    
    # Symbolic link analysis
    local symlinks=$(find "$target_path" -type l 2>/dev/null)
    
    # Update report
    jq --argjson total_recursive "$((total_recursive_items - 1))" \
       --argjson max_depth "$((max_depth - $(echo "$target_path" | tr -cd '/' | wc -c) - 1))" \
       --arg extensions "$extensions" \
       --arg duplicates "$duplicates" \
       --arg symlinks "$symlinks" \
       '.file_analysis.comprehensive_scan = {
           "total_recursive_items": $total_recursive,
           "maximum_depth": $max_depth,
           "file_extensions": $extensions,
           "duplicate_files": $duplicates,
           "symbolic_links": $symlinks
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Security audit implementation
perform_security_audit() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing security audit of $target_path..."
    
    # Permission analysis
    local world_writable=$(find "$target_path" -type f -perm -002 2>/dev/null)
    local world_readable=$(find "$target_path" -type f -perm -004 2>/dev/null | wc -l)
    local setuid_files=$(find "$target_path" -type f -perm -4000 2>/dev/null)
    local setgid_files=$(find "$target_path" -type f -perm -2000 2>/dev/null)
    
    # Ownership analysis
    local root_owned=$(find "$target_path" -user root 2>/dev/null | wc -l)
    local no_owner=$(find "$target_path" -nouser 2>/dev/null)
    local no_group=$(find "$target_path" -nogroup 2>/dev/null)
    
    # Sensitive file patterns (-E enables | alternation; -print0/xargs -0 survive spaces)
    local sensitive_patterns="password|secret|key|token|credential|private"
    local sensitive_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -l -i -E "$sensitive_patterns" 2>/dev/null)
    
    # Update report
    jq --arg world_writable "$world_writable" \
       --argjson world_readable "$world_readable" \
       --arg setuid_files "$setuid_files" \
       --arg setgid_files "$setgid_files" \
       --argjson root_owned "$root_owned" \
       --arg no_owner "$no_owner" \
       --arg no_group "$no_group" \
       --arg sensitive_files "$sensitive_files" \
       '.security_findings = {
           "world_writable_files": $world_writable,
           "world_readable_count": $world_readable,
           "setuid_files": $setuid_files,
           "setgid_files": $setgid_files,
           "root_owned_count": $root_owned,
           "orphaned_files": $no_owner,
           "orphaned_groups": $no_group,
           "sensitive_files": $sensitive_files
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Performance scan implementation
perform_performance_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing performance scan of $target_path..."
    
    # Large file analysis
    local files_over_100mb=$(find "$target_path" -type f -size +100M 2>/dev/null)
    local files_over_1gb=$(find "$target_path" -type f -size +1G 2>/dev/null | wc -l)
    
    # Cache and temporary file analysis
    local cache_files=$(find "$target_path" -name "*cache*" -o -name "*.tmp" -o -name "*.temp" 2>/dev/null)
    local log_files=$(find "$target_path" -name "*.log" 2>/dev/null)
    
    # File access patterns
    local recently_accessed=$(find "$target_path" -type f -atime -1 2>/dev/null | wc -l)
    local never_accessed=$(find "$target_path" -type f -atime +365 2>/dev/null | wc -l)
    
    # Update report
    jq --arg files_over_100mb "$files_over_100mb" \
       --argjson files_over_1gb "$files_over_1gb" \
       --arg cache_files "$cache_files" \
       --arg log_files "$log_files" \
       --argjson recently_accessed "$recently_accessed" \
       --argjson never_accessed "$never_accessed" \
       '.file_analysis.performance_scan = {
           "large_files_100mb": $files_over_100mb,
           "large_files_1gb_count": $files_over_1gb,
           "cache_temp_files": $cache_files,
           "log_files": $log_files,
           "recently_accessed_count": $recently_accessed,
           "stale_files_count": $never_accessed
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Compliance scan implementation
perform_compliance_scan() {
    local target_path="$1"
    local report_file="$2"
    
    echo "Performing compliance scan of $target_path..."
    
    # Data classification patterns (-E enables | alternation)
    local pii_patterns="ssn|social.security|credit.card|phone|email|address"
    local financial_patterns="account.number|routing|swift|iban|tax.id"
    local health_patterns="medical|patient|diagnosis|prescription|hipaa"
    
    local pii_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -l -i -E "$pii_patterns" 2>/dev/null)
    local financial_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -l -i -E "$financial_patterns" 2>/dev/null)
    local health_files=$(find "$target_path" -type f -print0 2>/dev/null | xargs -0 grep -l -i -E "$health_patterns" 2>/dev/null)
    
    # Retention analysis
    local files_over_7_years=$(find "$target_path" -type f -mtime +2555 2>/dev/null | wc -l)
    local files_over_3_years=$(find "$target_path" -type f -mtime +1095 2>/dev/null | wc -l)
    
    # Update report
    jq --arg pii_files "$pii_files" \
       --arg financial_files "$financial_files" \
       --arg health_files "$health_files" \
       --argjson files_over_7_years "$files_over_7_years" \
       --argjson files_over_3_years "$files_over_3_years" \
       '.file_analysis.compliance_scan = {
           "pii_files": $pii_files,
           "financial_files": $financial_files,
           "health_files": $health_files,
           "retention_7_years": $files_over_7_years,
           "retention_3_years": $files_over_3_years
       }' "$report_file" > "${report_file}.tmp" && mv "${report_file}.tmp" "$report_file"
}

# Fleet directory management
manage_fleet_directories() {
    local action="$1"
    local target_pattern="$2"
    local scan_level="${3:-standard_scan}"
    
    log_action "Fleet directory management: $action on $target_pattern"
    
    case "$action" in
        "scan_all")
            for category in "${!DIRECTORY_CATEGORIES[@]}"; do
                IFS=',' read -ra PATHS <<< "${DIRECTORY_CATEGORIES[$category]}"
                for path in "${PATHS[@]}"; do
                    if [[ -d "$path" ]]; then
                        analyze_directory "$path" "$scan_level" "$category"
                    fi
                done
            done
            ;;
        "scan_category")
            if [[ -n "${DIRECTORY_CATEGORIES[$target_pattern]}" ]]; then
                IFS=',' read -ra PATHS <<< "${DIRECTORY_CATEGORIES[$target_pattern]}"
                for path in "${PATHS[@]}"; do
                    if [[ -d "$path" ]]; then
                        analyze_directory "$path" "$scan_level" "$target_pattern"
                    fi
                done
            fi
            ;;
        "scan_path")
            if [[ -d "$target_pattern" ]]; then
                analyze_directory "$target_pattern" "$scan_level" "custom"
            fi
            ;;
        "generate_report")
            generate_fleet_report
            ;;
    esac
}

# Generate comprehensive fleet report
generate_fleet_report() {
    local fleet_report="$REPORT_DIR/fleet_directory_report_$(date +%Y%m%d_%H%M%S).json"
    
    echo "Generating fleet directory report..."
    
    # Combine all individual reports
    local reports=($(find "$REPORT_DIR" -name "directory_analysis_*.json" -mtime -1))
    
    cat > "$fleet_report" << EOF
{
    "fleet_report": {
        "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
        "hostname": "$(hostname)",
        "total_reports": ${#reports[@]},
        "reports": []
    }
}
EOF
    
    for report in "${reports[@]}"; do
        if [[ -f "$report" ]]; then
            jq --slurpfile new_report "$report" '.fleet_report.reports += $new_report' "$fleet_report" > "${fleet_report}.tmp" && mv "${fleet_report}.tmp" "$fleet_report"
        fi
    done
    
    log_action "Fleet report generated: $fleet_report"
    echo "$fleet_report"
}

# Main execution function
main() {
    local action="${1:-analyze}"
    local target="${2:-/Users}"
    local scan_level="${3:-standard_scan}"
    
    log_action "=== MacFleet Directory Management Started ==="
    log_action "Action: $action, Target: $target, Scan Level: $scan_level"
    
    case "$action" in
        "analyze")
            analyze_directory "$target" "$scan_level"
            ;;
        "fleet-scan")
            manage_fleet_directories "scan_all" "" "$scan_level"
            ;;
        "category-scan")
            manage_fleet_directories "scan_category" "$target" "$scan_level"
            ;;
        "fleet-report")
            manage_fleet_directories "generate_report"
            ;;
        "help")
            echo "Usage: $0 [action] [target] [scan_level]"
            echo "Actions: analyze, fleet-scan, category-scan, fleet-report, help"
            echo "Scan Levels: quick_scan, standard_scan, comprehensive_scan, security_audit, performance_scan, compliance_scan"
            echo "Categories: ${!DIRECTORY_CATEGORIES[*]}"
            ;;
        *)
            log_action "ERROR: Unknown action: $action"
            exit 1
            ;;
    esac
    
    log_action "=== Directory management completed ==="
}

# Execute main function
main "$@"

Directory Listing Best Practices

Handle Spaces in File Names

# Safe handling of file names with spaces
while IFS= read -r -d '' file; do
    echo "Processing: $file"
    # Your processing logic here
done < <(find "/path/with spaces" -type f -print0)

Filter by File Types

# List only image files
ls -la /Users/QA/Desktop/Wallpapers/*.{jpg,jpeg,png,gif,bmp} 2>/dev/null

# List only text files
find /path/to/directory -name "*.txt" -o -name "*.md" -o -name "*.doc"

Performance Considerations

# For large directories, use find with limited depth
find /large/directory -maxdepth 2 -type f | head -100

# Use xargs for processing many files
find /directory -name "*.log" | xargs wc -l

Common Directory Operations

Directory Size Analysis

#!/bin/bash

# Analyze directory sizes
analyze_directory_sizes() {
    local base_path="$1"
    
    echo "Directory size analysis for: $base_path"
    echo "=================================="
    
    # Top 10 largest subdirectories
    du -h "$base_path"/* 2>/dev/null | sort -hr | head -10
    
    echo -e "\nTotal directory size:"
    du -sh "$base_path"
}

analyze_directory_sizes "/Users"

File Count by Type

#!/bin/bash

# Count files by extension
count_files_by_type() {
    local directory="$1"
    
    echo "File type distribution in: $directory"
    find "$directory" -type f | grep '\.' | rev | cut -d. -f1 | rev | sort | uniq -c | sort -nr
}

count_files_by_type "/Users/QA/Desktop/Wallpapers"

Important Notes

  • Use quotes around paths containing spaces: ls "/path with spaces"
  • Escape special characters with backslash when not using quotes
  • Check permissions before accessing system directories
  • Test scripts on sample directories before production use
  • Monitor performance when analyzing large directory structures
  • Consider security when listing sensitive directories in logs
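
The permission note above can be folded into a small reusable guard. The check_access helper below is an illustrative sketch, not part of the tutorial's scripts; it probes a directory before any listing is attempted:

```shell
#!/bin/bash

# Probe a directory before listing it, rather than letting ls fail mid-script
check_access() {
    local dir="$1"
    if [[ ! -d "$dir" ]]; then
        echo "missing: $dir"
        return 1
    elif [[ -r "$dir" && -x "$dir" ]]; then   # need r to list, x to traverse
        echo "readable: $dir"
        return 0
    else
        echo "no access: $dir"
        return 2
    fi
}

# Usage
check_access /nonexistent/path || echo "  -> skipping that one"
check_access /tmp
```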


Setting Up a GitHub Actions Runner on a Mac Mini (Apple Silicon)

GitHub Actions Runner

GitHub Actions is a powerful CI/CD platform that lets you automate your software development workflows. While GitHub offers hosted runners, self-hosted runners provide greater control and customization for your CI/CD setup. This tutorial walks you through configuring and connecting a self-hosted runner on a Mac mini to run macOS pipelines.

Prerequisites

Before you begin, make sure you have:

  • A Mac mini (sign up at Macfleet)
  • A GitHub repository with administrator rights
  • A package manager installed (preferably Homebrew)
  • Git installed on your system

Step 1: Create a Dedicated User Account

First, create a dedicated user account for the GitHub Actions runner:

# Create the 'gh-runner' user account
sudo dscl . -create /Users/gh-runner
sudo dscl . -create /Users/gh-runner UserShell /bin/bash
sudo dscl . -create /Users/gh-runner RealName "GitHub runner"
sudo dscl . -create /Users/gh-runner UniqueID "1001"   # must be a UID not already in use
sudo dscl . -create /Users/gh-runner PrimaryGroupID 20
sudo dscl . -create /Users/gh-runner NFSHomeDirectory /Users/gh-runner

# Set the password for the user
sudo dscl . -passwd /Users/gh-runner your_password

# Add 'gh-runner' to the 'admin' group
sudo dscl . -append /Groups/admin GroupMembership gh-runner

Switch to the new user account:

su gh-runner

Step 2: Install Required Software

Install Git and Rosetta 2 (if you are on Apple Silicon):

# Install Git if it is not already installed
brew install git

# Install Rosetta 2 on Apple Silicon Macs (--agree-to-license skips the prompt)
softwareupdate --install-rosetta --agree-to-license

Step 3: Configure the GitHub Actions Runner

  1. Go to your GitHub repository
  2. Navigate to Settings > Actions > Runners
  3. Click "New self-hosted runner" (https://github.com/<username>/<repository>/settings/actions/runners/new)
  4. Select macOS as the runner image and ARM64 as the architecture
  5. Follow the commands provided to download and configure the runner
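
GitHub's setup page generates the exact commands for your repository; they follow the shape sketched below. The release version, URL, and registration token here are placeholders — copy the real values from the page rather than these:

```shell
# Placeholder sketch of the commands GitHub generates on the
# "New self-hosted runner" page; do not run with these literal values
mkdir actions-runner && cd actions-runner

# Download the ARM64 macOS runner package (version is a placeholder)
curl -o actions-runner-osx-arm64.tar.gz -L \
  "https://github.com/actions/runner/releases/download/vX.Y.Z/actions-runner-osx-arm64-X.Y.Z.tar.gz"
tar xzf actions-runner-osx-arm64.tar.gz

# Register the runner against your repository (token is a placeholder)
./config.sh --url "https://github.com/<username>/<repository>" --token "<REGISTRATION_TOKEN>"
```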

Create a .env file in the runner's _work directory:

# _work/.env file
ImageOS=macos15
XCODE_15_DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer

  1. Run the run.sh script in your runner directory to complete the setup.
  2. Verify in the terminal that the runner is active and listening for jobs, and check the runner's association and Idle status in the repository's GitHub settings.
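
run.sh keeps the runner attached to your terminal session. To have it survive logouts and reboots, the runner package also ships an svc.sh helper that registers it as a launchd service; a sketch, run from the runner directory:

```shell
# From the runner directory: install, start, and check the launchd service
./svc.sh install   # registers a launchd agent for the current user
./svc.sh start
./svc.sh status
```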

Step 4: Configure Sudoers (Optional)

If your actions require root privileges, configure the sudoers file:

sudo visudo

Add the following line:

gh-runner ALL=(ALL) NOPASSWD: ALL
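
NOPASSWD: ALL is convenient but grants unrestricted root. If your workflows only need a handful of privileged commands, a narrower rule reduces the blast radius; the command paths below are illustrative examples, not a recommendation specific to this setup:

```shell
# A more restrictive alternative (edit with visudo); command paths are examples
gh-runner ALL=(ALL) NOPASSWD: /usr/sbin/softwareupdate, /usr/sbin/installer
```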

Step 5: Use the Runner in Workflows

Configure your GitHub Actions workflow to use the self-hosted runner:

name: Sample workflow

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - name: Install NodeJS
        run: brew install node

The runner is authenticated against your repository and labeled with self-hosted, macOS, and ARM64. Use it in your workflows by specifying these labels in the runs-on field:

runs-on: [self-hosted, macOS, ARM64]

Best Practices

  • Keep your runner software up to date
  • Monitor the runner logs regularly for issues
  • Use specific labels for different runner types
  • Implement appropriate security measures
  • Consider using multiple runners for load balancing

Troubleshooting

Common issues and solutions:

  1. Runner not connecting:

    • Check network connectivity
    • Verify that the GitHub token is valid
    • Make sure permissions are set correctly
  2. Build failures:

    • Check the Xcode installation
    • Verify required dependencies
    • Review the workflow logs
  3. Permission issues:

    • Check user permissions
    • Verify the sudoers configuration
    • Review file system permissions

Conclusion

You now have a self-hosted GitHub Actions runner configured on your Mac mini. This setup gives you more control over your CI/CD environment and lets you run macOS-specific workflows efficiently.

Remember to maintain your runner regularly and keep it updated with the latest security patches and software versions.

Native App

Macfleet native app

Macfleet Installation Guide

Macfleet is a powerful fleet management solution designed specifically for cloud-hosted Mac Mini environments. As a Mac Mini cloud hosting provider, you can use Macfleet to monitor, manage, and optimize your entire fleet of virtualized Mac instances.

This installation guide walks you through setting up Macfleet monitoring on macOS, Windows, and Linux systems to ensure comprehensive oversight of your cloud infrastructure.

🍎 macOS

  • Download the .dmg file for Mac here
  • Double-click the downloaded .dmg file
  • Drag the Macfleet application to the Applications folder
  • Eject the .dmg file
  • Open System Preferences > Security & Privacy
    • Privacy tab > Accessibility
    • Check Macfleet to allow monitoring
  • Launch Macfleet from Applications
  • Tracking starts automatically

🪟 Windows

  • Download the .exe file for Windows here
  • Right-click the .exe file > "Run as administrator"
  • Follow the installation wizard
  • Accept the terms and conditions
  • Allow the app in Windows Defender if prompted
  • Grant application monitoring permissions
  • Launch Macfleet from the Start Menu
  • The application starts tracking automatically

🐧 Linux

  • Download the .deb package (Ubuntu/Debian) or .rpm (CentOS/RHEL) here
  • Install it using your package manager
    • Ubuntu/Debian: sudo dpkg -i Macfleet-linux.deb
    • CentOS/RHEL: sudo rpm -ivh Macfleet-linux.rpm
  • Allow X11 access permissions if prompted
  • Add the user to the appropriate groups if needed
  • Launch Macfleet from the Applications menu
  • The application starts tracking automatically

Note: After installation on any of these systems, sign in with your Macfleet credentials to sync data with your dashboard.