feat: initial repository setup with Git hooks
@@ -0,0 +1,273 @@

# Git Symbolic Links Setup for Blog Content

## Overview

This document explains the setup that allows the blog content to be managed in Obsidian while being properly versioned in Git. The setup uses Git hooks to automatically handle symbolic links, ensuring that actual content is committed while maintaining symbolic links for local development.

## Architecture

```mermaid
graph TD
    A[Obsidian Vault] -->|Symbolic Links| B[Blog Repository]
    B -->|Pre-commit Hook| C[Convert to Content]
    C -->|Commit| D[Git Repository]
    D -->|Post-commit Hook| E[Restore Symlinks]
    E -->|Local Development| B
```

## Directory Structure

### Obsidian Content Location
```
/mnt/synology/obsidian/Public/Blog/
├── posts/
├── projects/
├── configurations/
├── external-posts/
├── configs/
├── images/
├── infrastructure/
└── content/
```

### Blog Repository Structure
```
laforceit-blog/
├── src/content/
│   ├── posts -> /mnt/synology/obsidian/Public/Blog/posts
│   ├── projects -> /mnt/synology/obsidian/Public/Blog/projects
│   ├── configurations -> /mnt/synology/obsidian/Public/Blog/configurations
│   └── external-posts -> /mnt/synology/obsidian/Public/Blog/external-posts
└── public/blog/
    ├── configs -> /mnt/synology/obsidian/Public/Blog/configs
    ├── images -> /mnt/synology/obsidian/Public/Blog/images
    ├── infrastructure -> /mnt/synology/obsidian/Public/Blog/infrastructure
    └── posts -> /mnt/synology/obsidian/Public/Blog/posts
```

## Setup Instructions

1. Clone the Blog Repository
```bash
git clone https://git.argobox.com/KeyArgo/laforceit-blog.git
cd laforceit-blog
```

2. Create the Scripts Directory
```bash
mkdir -p scripts
```

3. Create the Content Processing Script
Create `scripts/process-content-links.sh`:
```bash
#!/bin/bash

# Script to handle symbolic links before commit
echo "Processing symbolic links for content..."

# Associative array of content directories to process
declare -A CONTENT_PATHS
# src/content directories
CONTENT_PATHS["posts"]="src/content/posts"
CONTENT_PATHS["projects"]="src/content/projects"
CONTENT_PATHS["configurations"]="src/content/configurations"
CONTENT_PATHS["external-posts"]="src/content/external-posts"
# public/blog directories
CONTENT_PATHS["configs"]="public/blog/configs"
CONTENT_PATHS["images"]="public/blog/images"
CONTENT_PATHS["infrastructure"]="public/blog/infrastructure"
CONTENT_PATHS["blog-posts"]="public/blog/posts"

for dir_name in "${!CONTENT_PATHS[@]}"; do
  dir_path="${CONTENT_PATHS[$dir_name]}"
  if [ -L "$dir_path" ]; then
    echo "Processing $dir_path..."
    # Resolve the link target, replace the link with a real copy, and stage it
    target=$(readlink "$dir_path")
    rm "$dir_path"
    mkdir -p "$(dirname "$dir_path")"
    cp -r "$target" "$dir_path"
    git add "$dir_path"
    echo "Processed $dir_path -> $target"
  else
    echo "Skipping $dir_path (not a symbolic link)"
  fi
done

echo "Content processing complete!"
```

4. Create Git Hooks

### Pre-commit Hook
Create `.git/hooks/pre-commit`:
```bash
#!/bin/bash
# Pre-commit hook to process symbolic links

echo "Running pre-commit hook for blog content..."

# Get the absolute path to the script
SCRIPT_PATH="$(git rev-parse --show-toplevel)/scripts/process-content-links.sh"

# Check if the script exists and is executable
if [ -x "$SCRIPT_PATH" ]; then
  bash "$SCRIPT_PATH"
  # Add any new or changed files resulting from the script
  git add -A
else
  echo "Error: Content processing script not found or not executable at $SCRIPT_PATH"
  echo "Please ensure the script exists and has execute permissions"
  exit 1
fi
```

### Post-commit Hook
Create `.git/hooks/post-commit`:
```bash
#!/bin/bash
# Post-commit hook to restore symbolic links

echo "Running post-commit hook to restore symbolic links..."

# Associative array of content directories and their Obsidian targets
declare -A SYMLINK_TARGETS=(
  ["src/content/posts"]="/mnt/synology/obsidian/Public/Blog/posts"
  ["src/content/projects"]="/mnt/synology/obsidian/Public/Blog/projects"
  ["src/content/configurations"]="/mnt/synology/obsidian/Public/Blog/configurations"
  ["src/content/external-posts"]="/mnt/synology/obsidian/Public/Blog/external-posts"
  ["public/blog/configs"]="/mnt/synology/obsidian/Public/Blog/configs"
  ["public/blog/images"]="/mnt/synology/obsidian/Public/Blog/images"
  ["public/blog/infrastructure"]="/mnt/synology/obsidian/Public/Blog/infrastructure"
  ["public/blog/posts"]="/mnt/synology/obsidian/Public/Blog/posts"
)

for dir_path in "${!SYMLINK_TARGETS[@]}"; do
  target="${SYMLINK_TARGETS[$dir_path]}"
  if [ -d "$target" ]; then
    echo "Restoring symlink for $dir_path -> $target"
    # Replace the committed copy with a symlink back to the Obsidian vault
    rm -rf "$dir_path"
    mkdir -p "$(dirname "$dir_path")"
    ln -s "$target" "$dir_path"
  else
    echo "Warning: Target directory $target does not exist"
  fi
done

echo "Symbolic links restored!"
```

5. Set Proper Permissions
```bash
chmod +x scripts/process-content-links.sh
chmod +x .git/hooks/pre-commit
chmod +x .git/hooks/post-commit
```

6. Create Symbolic Links
```bash
# Create necessary directories
mkdir -p src/content public/blog

# Create symbolic links for src/content
ln -s /mnt/synology/obsidian/Public/Blog/posts src/content/posts
ln -s /mnt/synology/obsidian/Public/Blog/projects src/content/projects
ln -s /mnt/synology/obsidian/Public/Blog/configurations src/content/configurations
ln -s /mnt/synology/obsidian/Public/Blog/external-posts src/content/external-posts

# Create symbolic links for public/blog
ln -s /mnt/synology/obsidian/Public/Blog/configs public/blog/configs
ln -s /mnt/synology/obsidian/Public/Blog/images public/blog/images
ln -s /mnt/synology/obsidian/Public/Blog/infrastructure public/blog/infrastructure
ln -s /mnt/synology/obsidian/Public/Blog/posts public/blog/posts
```
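
Note that files under `.git/hooks` are local to a clone and are never pushed, so a fresh clone will not have the hooks until they are recreated. One way to make that repeatable is a small install script kept alongside the processing script; the `scripts/hooks/` location and `setup-hooks.sh` name below are assumptions, not part of the original setup:

```bash
#!/bin/bash
# scripts/setup-hooks.sh (hypothetical helper) - run once after cloning.
# Copies versioned hook scripts from scripts/hooks/ into .git/hooks.

set -e
REPO_ROOT="$(git rev-parse --show-toplevel)"

for hook in pre-commit post-commit; do
  cp "$REPO_ROOT/scripts/hooks/$hook" "$REPO_ROOT/.git/hooks/$hook"
  chmod +x "$REPO_ROOT/.git/hooks/$hook"
done

echo "Git hooks installed."
```

Alternatively, `git config core.hooksPath scripts/hooks` points Git at a tracked hooks directory with no copying step at all.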

## Git Configuration

1. Configure Git to Handle Symbolic Links
```bash
git config core.symlinks true
```

2. Create `.gitattributes`
```
# Handle symbolic links as real content
public/blog/* !symlink
src/content/* !symlink

# Treat these directories as regular directories even if they're symlinks
public/blog/configs/ -symlink
public/blog/images/ -symlink
public/blog/infrastructure/ -symlink
public/blog/posts/ -symlink
src/content/posts/ -symlink
src/content/projects/ -symlink
src/content/configurations/ -symlink
src/content/external-posts/ -symlink

# Set text files to automatically normalize line endings
* text=auto
```

## How It Works

1. **Normal Development**
   - Content is edited in Obsidian
   - The blog repository uses symbolic links to this content
   - Local development works seamlessly

2. **During Commits**
   - The pre-commit hook activates
   - Symbolic links are converted to actual content
   - The content is committed to the repository

3. **After Commits**
   - The post-commit hook activates
   - Symbolic links are restored
   - Local development continues normally
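
A quick way to confirm the round trip behaves as described (not part of the original setup, just a sanity check):

```bash
# After a commit, the working tree should contain symlinks again...
ls -l src/content
# e.g. posts -> /mnt/synology/obsidian/Public/Blog/posts

# ...while the committed tree should contain regular files and directories
git ls-tree HEAD src/content/
```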

## Viewing in Gitea

You can view this setup in Gitea:
1. The actual content (not the symlinks) will be visible in the repository
2. The `scripts/` directory will be visible; the hooks themselves live in `.git/hooks`, which is local to each clone and is not pushed to Gitea
3. The symbolic links will appear as regular directories in Gitea

## Troubleshooting

1. **Symbolic Links Not Working**
   - Check file permissions
   - Verify the Obsidian vault location
   - Ensure Git symlinks are enabled (`git config core.symlinks true`)

2. **Hooks Not Executing**
   - Check execute permissions on the scripts
   - Verify the hook files are in `.git/hooks`
   - Check that the script paths are correct

3. **Content Not Committing**
   - Check the Git configuration
   - Verify that the pre-commit hook ran
   - Check file permissions
## Maintenance

1. **Adding New Content Directories**
   - Add the directory to `CONTENT_PATHS` in `scripts/process-content-links.sh`
   - Add it to `SYMLINK_TARGETS` in the post-commit hook
   - Create the corresponding symbolic link

2. **Changing the Obsidian Location**
   - Update the paths in the post-commit hook
   - Update the existing symbolic links

3. **Backup Considerations**
   - Back up both the Obsidian content and the blog repository
   - Keep the hooks and scripts in version control

## Security Notes

1. Keep sensitive content outside of public blog directories
2. Review content before commits
3. Use `.gitignore` for private files
4. Consider access permissions on Obsidian vault
|
@@ -0,0 +1,52 @@
|
|||
---
|
||||
title: "High-Availability Kubernetes Setup Guide"
|
||||
description: "Build a resilient multi-node Kubernetes cluster for production workloads"
|
||||
pubDate: "2024-03-20"
|
||||
heroImage: "/blog/images/ha-kubernetes.jpg"
|
||||
category: "Infrastructure"
|
||||
tags: ["kubernetes", "high-availability", "infrastructure", "clustering"]
|
||||
draft: true
|
||||
---
|
||||
|
||||
<div class="draft-indicator">
|
||||
<span class="draft-icon">⚠️</span>
|
||||
<span class="draft-text">This article is a draft and needs to be written</span>
|
||||
</div>
|
||||
|
||||
# High-Availability Kubernetes Setup Guide
|
||||
|
||||
This guide will walk you through setting up a highly available Kubernetes cluster. We'll cover:
|
||||
|
||||
- Architecture planning
|
||||
- Control plane redundancy
|
||||
- etcd cluster setup
|
||||
- Load balancer configuration
|
||||
- Node management
|
||||
- Network configuration
|
||||
- Storage considerations
|
||||
- Backup and disaster recovery
|
||||
- Monitoring and alerting
|
||||
|
||||
[Content coming soon...]
|
||||
|
||||
<style>
|
||||
.draft-indicator {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
background: rgba(234, 179, 8, 0.1);
|
||||
border: 1px solid rgba(234, 179, 8, 0.2);
|
||||
padding: 1rem;
|
||||
border-radius: 8px;
|
||||
margin-bottom: 2rem;
|
||||
}
|
||||
|
||||
.draft-icon {
|
||||
font-size: 1.5rem;
|
||||
}
|
||||
|
||||
.draft-text {
|
||||
color: rgb(234, 179, 8);
|
||||
font-weight: 500;
|
||||
}
|
||||
</style>
|
|
@@ -0,0 +1,52 @@
|
|||
---
|
||||
title: "NAS Integration with Kubernetes Guide"
|
||||
description: "Learn how to integrate your NAS storage with Kubernetes for persistent data"
|
||||
pubDate: "2024-03-20"
|
||||
heroImage: "/blog/images/nas-integration.jpg"
|
||||
category: "Infrastructure"
|
||||
tags: ["kubernetes", "storage", "nas", "infrastructure"]
|
||||
draft: true
|
||||
---
|
||||
|
||||
<div class="draft-indicator">
|
||||
<span class="draft-icon">⚠️</span>
|
||||
<span class="draft-text">This article is a draft and needs to be written</span>
|
||||
</div>
|
||||
|
||||
# NAS Integration with Kubernetes Guide
|
||||
|
||||
This guide will walk you through integrating your NAS storage with Kubernetes. We'll cover:
|
||||
|
||||
- NFS server setup
|
||||
- Storage class configuration
|
||||
- Persistent volume claims
|
||||
- Dynamic provisioning
|
||||
- Backup strategies
|
||||
- Performance optimization
|
||||
- Security considerations
|
||||
- Monitoring and maintenance
|
||||
- Common troubleshooting
|
||||
|
||||
[Content coming soon...]
|
||||
|
||||
<style>
|
||||
.draft-indicator {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
background: rgba(234, 179, 8, 0.1);
|
||||
border: 1px solid rgba(234, 179, 8, 0.2);
|
||||
padding: 1rem;
|
||||
border-radius: 8px;
|
||||
margin-bottom: 2rem;
|
||||
}
|
||||
|
||||
.draft-icon {
|
||||
font-size: 1.5rem;
|
||||
}
|
||||
|
||||
.draft-text {
|
||||
color: rgb(234, 179, 8);
|
||||
font-weight: 500;
|
||||
}
|
||||
</style>
|
|
@@ -0,0 +1,53 @@
|
|||
---
|
||||
title: "Home Lab Network Security Guide"
|
||||
description: "Implement comprehensive network security measures for your home lab"
|
||||
pubDate: "2024-03-20"
|
||||
heroImage: "/blog/images/network-security.jpg"
|
||||
category: "Infrastructure"
|
||||
tags: ["security", "networking", "infrastructure", "firewall"]
|
||||
draft: true
|
||||
---
|
||||
|
||||
<div class="draft-indicator">
|
||||
<span class="draft-icon">⚠️</span>
|
||||
<span class="draft-text">This article is a draft and needs to be written</span>
|
||||
</div>
|
||||
|
||||
# Home Lab Network Security Guide
|
||||
|
||||
This guide will walk you through implementing robust network security for your home lab. We'll cover:
|
||||
|
||||
- Network segmentation with VLANs
|
||||
- Firewall configuration
|
||||
- Intrusion Detection/Prevention
|
||||
- Network monitoring
|
||||
- Access control policies
|
||||
- SSL/TLS implementation
|
||||
- Security logging
|
||||
- Vulnerability scanning
|
||||
- Incident response
|
||||
- Security best practices
|
||||
|
||||
[Content coming soon...]
|
||||
|
||||
<style>
|
||||
.draft-indicator {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
background: rgba(234, 179, 8, 0.1);
|
||||
border: 1px solid rgba(234, 179, 8, 0.2);
|
||||
padding: 1rem;
|
||||
border-radius: 8px;
|
||||
margin-bottom: 2rem;
|
||||
}
|
||||
|
||||
.draft-icon {
|
||||
font-size: 1.5rem;
|
||||
}
|
||||
|
||||
.draft-text {
|
||||
color: rgb(234, 179, 8);
|
||||
font-weight: 500;
|
||||
}
|
||||
</style>
|
|
@@ -0,0 +1,52 @@
|
|||
---
|
||||
title: "Secure VPN Access Setup Guide"
|
||||
description: "Set up a secure VPN solution for remote access to your home lab"
|
||||
pubDate: "2024-03-20"
|
||||
heroImage: "/blog/images/secure-vpn.jpg"
|
||||
category: "Infrastructure"
|
||||
tags: ["security", "vpn", "networking", "infrastructure"]
|
||||
draft: true
|
||||
---
|
||||
|
||||
<div class="draft-indicator">
|
||||
<span class="draft-icon">⚠️</span>
|
||||
<span class="draft-text">This article is a draft and needs to be written</span>
|
||||
</div>
|
||||
|
||||
# Secure VPN Access Setup Guide
|
||||
|
||||
This guide will walk you through setting up secure VPN access to your home lab. We'll cover:
|
||||
|
||||
- VPN server selection and setup
|
||||
- Certificate management
|
||||
- Network configuration
|
||||
- Client setup
|
||||
- Split tunneling
|
||||
- Access control
|
||||
- Monitoring and logging
|
||||
- Security best practices
|
||||
- Troubleshooting common issues
|
||||
|
||||
[Content coming soon...]
|
||||
|
||||
<style>
|
||||
.draft-indicator {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
background: rgba(234, 179, 8, 0.1);
|
||||
border: 1px solid rgba(234, 179, 8, 0.2);
|
||||
padding: 1rem;
|
||||
border-radius: 8px;
|
||||
margin-bottom: 2rem;
|
||||
}
|
||||
|
||||
.draft-icon {
|
||||
font-size: 1.5rem;
|
||||
}
|
||||
|
||||
.draft-text {
|
||||
color: rgb(234, 179, 8);
|
||||
font-weight: 500;
|
||||
}
|
||||
</style>
|
|
@@ -0,0 +1,27 @@
|
|||
---
|
||||
title: "Blog Posts Collection"
|
||||
description: "Documentation for blog posts"
|
||||
pubDate: 2025-04-18
|
||||
draft: true
|
||||
---
|
||||
|
||||
# Blog Posts Collection
|
||||
|
||||
This directory contains blog posts for the LaForceIT digital garden.
|
||||
|
||||
## Content Guidelines
|
||||
|
||||
- All posts should include proper frontmatter
|
||||
- Use Markdown for formatting content
|
||||
- Images should be placed in the public/blog/images directory
|
||||
|
||||
## Frontmatter Requirements
|
||||
|
||||
Every post needs at minimum:
|
||||
|
||||
```
|
||||
---
|
||||
title: "Post Title"
|
||||
pubDate: YYYY-MM-DD
|
||||
---
|
||||
```
|
|
@@ -0,0 +1,14 @@
|
|||
---
|
||||
title: Secure Remote Access with Cloudflare Tunnels
|
||||
description: How to set up Cloudflare Tunnels for secure remote access to your home lab services
|
||||
pubDate: Jul 22 2023
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: networking
|
||||
tags:
|
||||
- cloudflare
|
||||
- networking
|
||||
- security
|
||||
- homelab
|
||||
- tunnels
|
||||
readTime: "7 min read"
|
||||
---
|
|
@@ -0,0 +1,180 @@
|
|||
---
|
||||
title: Secure Remote Access with Cloudflare Tunnels
|
||||
description: How to set up Cloudflare Tunnels for secure remote access to your home lab services
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: networking
|
||||
tags:
|
||||
- cloudflare
|
||||
- networking
|
||||
- security
|
||||
- homelab
|
||||
- tunnels
|
||||
readTime: 7 min read
|
||||
---
|
||||
|
||||
# Secure Remote Access with Cloudflare Tunnels
|
||||
|
||||
Cloudflare Tunnels provide a secure way to expose your locally hosted applications and services to the internet without opening ports on your firewall or requiring a static IP address. This guide will show you how to set up Cloudflare Tunnels to securely access your home lab services from anywhere.
|
||||
|
||||
## Why Use Cloudflare Tunnels?
|
||||
|
||||
- **Security**: No need to open ports on your firewall
|
||||
- **Simplicity**: Works behind CGNAT, dynamic IPs, and complex network setups
|
||||
- **Performance**: Traffic routed through Cloudflare's global network
|
||||
- **Zero Trust**: Integrate with Cloudflare Access for authentication
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A Cloudflare account
|
||||
- A domain managed by Cloudflare
|
||||
- Docker installed (for containerized deployment)
|
||||
- Services you want to expose (e.g., web apps, SSH, etc.)
|
||||
|
||||
## Setting Up Cloudflare Tunnels
|
||||
|
||||
### 1. Install cloudflared
|
||||
|
||||
You can install cloudflared using Docker:
|
||||
|
||||
```bash
|
||||
docker pull cloudflare/cloudflared:latest
|
||||
```
|
||||
|
||||
Or directly on your system:
|
||||
|
||||
```bash
|
||||
# For Debian/Ubuntu
|
||||
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
|
||||
sudo dpkg -i cloudflared.deb
|
||||
|
||||
# For other systems, visit: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation
|
||||
```
|
||||
|
||||
### 2. Authenticate cloudflared
|
||||
|
||||
Run the following command to authenticate:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel login
|
||||
```
|
||||
|
||||
This will open a browser window where you'll need to log in to your Cloudflare account and select the domain you want to use with the tunnel.
|
||||
|
||||
### 3. Create a Tunnel
|
||||
|
||||
Create a new tunnel with a meaningful name:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel create homelab
|
||||
```
|
||||
|
||||
This will generate a tunnel ID and credentials file at `~/.cloudflared/`.
|
||||
|
||||
### 4. Configure your Tunnel
|
||||
|
||||
Create a config file at `~/.cloudflared/config.yml`:
|
||||
|
||||
```yaml
|
||||
tunnel: <TUNNEL_ID>
|
||||
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json
|
||||
|
||||
ingress:
|
||||
# Dashboard application
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
|
||||
# Grafana service
|
||||
- hostname: grafana.yourdomain.com
|
||||
service: http://localhost:3000
|
||||
|
||||
# SSH service
|
||||
- hostname: ssh.yourdomain.com
|
||||
service: ssh://localhost:22
|
||||
|
||||
# Catch-all rule, which responds with 404
|
||||
- service: http_status:404
|
||||
```
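
Before adding DNS records it can be worth confirming that the ingress rules parse; cloudflared includes a validator for exactly this (run it on the machine that holds `config.yml`):

```bash
# Validate the ingress section of ~/.cloudflared/config.yml
cloudflared tunnel ingress validate

# Check which ingress rule a given URL would match
cloudflared tunnel ingress rule https://grafana.yourdomain.com
```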
|
||||
|
||||
### 5. Route Traffic to Your Tunnel
|
||||
|
||||
Configure DNS records to route traffic to your tunnel:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel route dns homelab dashboard.yourdomain.com
|
||||
cloudflared tunnel route dns homelab grafana.yourdomain.com
|
||||
cloudflared tunnel route dns homelab ssh.yourdomain.com
|
||||
```
|
||||
|
||||
### 6. Start the Tunnel
|
||||
|
||||
Run the tunnel:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel run homelab
|
||||
```
|
||||
|
||||
For production deployments, you'll want to set up cloudflared as a service:
|
||||
|
||||
```bash
|
||||
# For systemd-based systems
|
||||
sudo cloudflared service install
|
||||
sudo systemctl start cloudflared
|
||||
```
|
||||
|
||||
## Docker Compose Example
|
||||
|
||||
For a containerized deployment, create a `docker-compose.yml` file:
|
||||
|
||||
```yaml
|
||||
version: '3.8'
|
||||
services:
|
||||
cloudflared:
|
||||
image: cloudflare/cloudflared:latest
|
||||
container_name: cloudflared
|
||||
restart: unless-stopped
|
||||
command: tunnel run
|
||||
environment:
|
||||
- TUNNEL_TOKEN=your_tunnel_token
|
||||
volumes:
|
||||
- ~/.cloudflared:/etc/cloudflared
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
- Store your credentials file safely; it provides full access to your tunnel
|
||||
- Consider using Cloudflare Access for additional authentication
|
||||
- Regularly rotate credentials and update cloudflared
|
||||
|
||||
## Advanced Configuration
|
||||
|
||||
### Zero Trust Access
|
||||
|
||||
You can integrate Cloudflare Tunnels with Cloudflare Access to require authentication:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
originRequest:
|
||||
noTLSVerify: true
|
||||
```
|
||||
|
||||
Then, create an Access application in the Cloudflare Zero Trust dashboard to protect this hostname.
|
||||
|
||||
### Health Checks
|
||||
|
||||
Configure health checks to ensure your services are running:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
originRequest:
|
||||
healthCheckEnabled: true
|
||||
healthCheckPath: /health
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
Cloudflare Tunnels provide a secure, reliable way to access your home lab services remotely without exposing your home network to the internet. With the setup described in this guide, you can securely access your services from anywhere in the world.
|
|
@@ -0,0 +1,206 @@
|
|||
---
|
||||
title: Setting Up FileBrowser for Self-Hosted File Management
|
||||
description: A step-by-step guide to deploying and configuring FileBrowser for secure, user-friendly file management in your home lab environment.
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- filebrowser
|
||||
- self-hosted
|
||||
- kubernetes
|
||||
- docker
|
||||
- file-management
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've said it before, and I'll say it again - the journey to a well-organized digital life begins with proper file management. If you're like me, you've got files scattered across multiple devices, cloud services, and servers. What if I told you there's a lightweight, sleek solution that puts you back in control without relying on third-party services?
|
||||
|
||||
Enter [FileBrowser](https://filebrowser.org/), a simple yet powerful self-hosted file management interface that I've been using in my home lab for the past few months. Let me show you how to set it up and some cool ways I'm using it.
|
||||
|
||||
## What is FileBrowser?
|
||||
|
||||
FileBrowser is an open-source, single binary file manager with a clean web interface that lets you:
|
||||
|
||||
- Access and manage files from any device with a browser
|
||||
- Share files with customizable permissions
|
||||
- Edit files directly in the browser
|
||||
- Perform basic file operations (copy, move, delete, upload, download)
|
||||
- Search through your files and folders
|
||||
|
||||
The best part? It's lightweight (< 20MB), written in Go, and runs on pretty much anything - from a Raspberry Pi to your Kubernetes cluster.
|
||||
|
||||
## Getting Started with FileBrowser
|
||||
|
||||
### Option 1: Docker Deployment
|
||||
|
||||
For the Docker enthusiasts (like me), here's how to get FileBrowser up and running in seconds:
|
||||
|
||||
```bash
|
||||
docker run -d \
|
||||
--name filebrowser \
|
||||
-v /path/to/your/files:/srv \
|
||||
-v /path/to/filebrowser/database:/database \
|
||||
-e PUID=$(id -u) \
|
||||
-e PGID=$(id -g) \
|
||||
-p 8080:80 \
|
||||
filebrowser/filebrowser:latest
|
||||
```
|
||||
|
||||
This will start FileBrowser on port 8080, with your files mounted at `/srv` inside the container.
|
||||
|
||||
### Option 2: Kubernetes Deployment with Helm
|
||||
|
||||
For my fellow Kubernetes fanatics, here's a simple Helm chart deployment:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: filebrowser
|
||||
labels:
|
||||
app: filebrowser
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: filebrowser
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: filebrowser
|
||||
spec:
|
||||
containers:
|
||||
- name: filebrowser
|
||||
image: filebrowser/filebrowser:latest
|
||||
ports:
|
||||
- containerPort: 80
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /database
|
||||
- name: data
|
||||
mountPath: /srv
|
||||
volumes:
|
||||
- name: config
|
||||
persistentVolumeClaim:
|
||||
claimName: filebrowser-config
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: filebrowser-data
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: filebrowser
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: filebrowser
|
||||
```
|
||||
|
||||
Don't forget to create the necessary PVCs for your configuration and data.
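
If it helps, here is a minimal sketch of those two claims; the `filebrowser-config` and `filebrowser-data` names match the deployment above, while the storage class and sizes are assumptions to adjust for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filebrowser-config
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumed - use your cluster's storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filebrowser-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumed
  resources:
    requests:
      storage: 50Gi
```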
|
||||
|
||||
## Configuring FileBrowser
|
||||
|
||||
Once you have FileBrowser running, you can access it at `http://your-server:8080`. The default credentials are:
|
||||
|
||||
- Username: `admin`
|
||||
- Password: `admin`
|
||||
|
||||
**Pro tip**: Change these immediately! You can do this through the UI or by using the FileBrowser CLI.
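
From the CLI, something along these lines should do it; the exact flags vary by FileBrowser version, so treat this as a sketch and confirm with `filebrowser users --help`:

```bash
# Update the admin password against the database used by the container
docker exec filebrowser \
  filebrowser users update admin \
  --password 'a-much-stronger-password' \
  --database /database/filebrowser.db
```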
|
||||
|
||||
### Custom Configuration
|
||||
|
||||
You can customize FileBrowser by modifying the configuration file. Here's what my config looks like:
|
||||
|
||||
```json
|
||||
{
|
||||
"port": 80,
|
||||
"baseURL": "",
|
||||
"address": "",
|
||||
"log": "stdout",
|
||||
"database": "/database/filebrowser.db",
|
||||
"root": "/srv",
|
||||
"auth": {
|
||||
"method": "json",
|
||||
"header": ""
|
||||
},
|
||||
"branding": {
|
||||
"name": "LaForceIT Files",
|
||||
"disableExternal": false,
|
||||
"files": "",
|
||||
"theme": "dark"
|
||||
},
|
||||
"cors": {
|
||||
"enabled": false,
|
||||
"credentials": false,
|
||||
"allowedHosts": []
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Securing FileBrowser
|
||||
|
||||
Security is crucial, especially when hosting a file manager. Here's how I secure my FileBrowser instance:
|
||||
|
||||
1. **Reverse Proxy**: I put FileBrowser behind a reverse proxy (Traefik) with SSL encryption.
|
||||
|
||||
2. **Authentication**: I've integrated with my Authelia setup for SSO across my services.
|
||||
|
||||
3. **User Isolation**: I create separate users with their own root directories to keep things isolated.
|
||||
|
||||
Here's a sample Traefik configuration for FileBrowser:
|
||||
|
||||
```yaml
|
||||
apiVersion: traefik.containo.us/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: filebrowser
|
||||
namespace: default
|
||||
spec:
|
||||
entryPoints:
|
||||
- websecure
|
||||
routes:
|
||||
- match: Host(`files.yourdomain.com`)
|
||||
kind: Rule
|
||||
services:
|
||||
- name: filebrowser
|
||||
port: 80
|
||||
middlewares:
|
||||
- name: auth-middleware
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
```
|
||||
|
||||
## My Top 5 FileBrowser Use Cases
|
||||
|
||||
1. **Home Media Management**: I organize my photos, music, and video collections.
|
||||
|
||||
2. **Document Repository**: A central place for important documents that I can access from anywhere.
|
||||
|
||||
3. **Code Snippet Library**: I keep commonly used code snippets organized by language and project.
|
||||
|
||||
4. **Backup Verification**: An easy way to browse my automated backups to verify they're working.
|
||||
|
||||
5. **Sharing Files**: When I need to share large files with friends or family, I create a temporary user with limited access.
|
||||
|
||||
## Power User Tips
|
||||
|
||||
Here are some tricks I've learned along the way:
|
||||
|
||||
- **Keyboard Shortcuts**: Press `?` in the UI to see all available shortcuts.
|
||||
- **Custom Branding**: Personalize the look and feel by setting a custom name and logo in the config.
|
||||
- **Multiple Instances**: Run multiple instances for different purposes (e.g., one for media, one for documents).
|
||||
- **Command Runner**: Use the built-in command runner to execute shell scripts on your server.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
FileBrowser has become an essential part of my home lab setup. It's lightweight, fast, and just gets the job done without unnecessary complexity. Whether you're a home lab enthusiast or just looking for a simple way to manage your files, FileBrowser is worth checking out.
|
||||
|
||||
What file management solution are you using? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on December 15, 2023 with the latest FileBrowser configuration options and security recommendations._
|
|
@@ -0,0 +1,332 @@
|
|||
---
|
||||
title: "Self-Hosting Git with Gitea: Your Own GitHub Alternative"
|
||||
description: A comprehensive guide to setting up Gitea - a lightweight, self-hosted Git service that gives you full control over your code repositories.
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- gitea
|
||||
- git
|
||||
- self-hosted
|
||||
- devops
|
||||
- kubernetes
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
If you're a developer like me who values ownership and privacy, you've probably wondered if there's a way to get the convenience of GitHub or GitLab without handing over your code to a third party. Enter Gitea - a painless, self-hosted Git service written in Go that I've been using for my personal projects for the past year.
|
||||
|
||||
Let me walk you through setting up your own Gitea instance and show you why it might be the perfect addition to your development workflow.
|
||||
|
||||
## Why Gitea?
|
||||
|
||||
First, let's talk about why you might want to run your own Git server:
|
||||
|
||||
- **Complete control**: Your code, your server, your rules.
|
||||
- **Privacy**: Keep sensitive projects completely private.
|
||||
- **No limits**: Create as many private repositories as you want.
|
||||
- **Lightweight**: Gitea runs smoothly on minimal hardware (even a Raspberry Pi).
|
||||
- **GitHub-like experience**: Familiar interface with issues, pull requests, and more.
|
||||
|
||||
I've tried several self-hosted Git solutions, but Gitea strikes the perfect balance between features and simplicity. It's like the Goldilocks of Git servers - not too heavy, not too light, just right.
|
||||
|
||||
## Getting Started with Gitea
|
||||
|
||||
### Option 1: Docker Installation
|
||||
|
||||
The easiest way to get started with Gitea is using Docker. Here's a simple `docker-compose.yml` file to get you up and running:
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
gitea:
|
||||
image: gitea/gitea:latest
|
||||
container_name: gitea
|
||||
environment:
|
||||
- USER_UID=1000
|
||||
- USER_GID=1000
|
||||
- GITEA__database__DB_TYPE=postgres
|
||||
- GITEA__database__HOST=db:5432
|
||||
- GITEA__database__NAME=gitea
|
||||
- GITEA__database__USER=gitea
|
||||
- GITEA__database__PASSWD=gitea_password
|
||||
restart: always
|
||||
volumes:
|
||||
- ./gitea:/data
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
ports:
|
||||
- "3000:3000"
|
||||
- "222:22"
|
||||
depends_on:
|
||||
- db
|
||||
networks:
|
||||
- gitea
|
||||
|
||||
db:
|
||||
image: postgres:14
|
||||
container_name: gitea-db
|
||||
restart: always
|
||||
environment:
|
||||
- POSTGRES_USER=gitea
|
||||
- POSTGRES_PASSWORD=gitea_password
|
||||
- POSTGRES_DB=gitea
|
||||
volumes:
|
||||
- ./postgres:/var/lib/postgresql/data
|
||||
networks:
|
||||
- gitea
|
||||
|
||||
networks:
|
||||
gitea:
|
||||
external: false
|
||||
```
|
||||
|
||||
Save this file and run:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Your Gitea instance will be available at `http://localhost:3000`.
|
||||
|
||||
### Option 2: Kubernetes Deployment
|
||||
|
||||
For those running a Kubernetes cluster (like me), here's a basic manifest to deploy Gitea:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: gitea-data
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: gitea
|
||||
labels:
|
||||
app: gitea
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: gitea
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: gitea
|
||||
spec:
|
||||
containers:
|
||||
- name: gitea
|
||||
image: gitea/gitea:latest
|
||||
ports:
|
||||
- containerPort: 3000
|
||||
- containerPort: 22
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /data
|
||||
env:
|
||||
- name: USER_UID
|
||||
value: "1000"
|
||||
- name: USER_GID
|
||||
value: "1000"
|
||||
volumes:
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: gitea-data
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: gitea
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 3000
|
||||
targetPort: 3000
|
||||
name: web
|
||||
- port: 22
|
||||
targetPort: 22
|
||||
name: ssh
|
||||
selector:
|
||||
app: gitea
|
||||
```
|
||||
|
||||
Apply it with:
|
||||
|
||||
```bash
|
||||
kubectl apply -f gitea.yaml
|
||||
```
|
||||
|
||||
## Initial Configuration
|
||||
|
||||
After installation, you'll be greeted with Gitea's setup page. Here are the settings I recommend:
|
||||
|
||||
1. **Database Settings**: If you followed the Docker Compose example, your database is already configured.
|
||||
|
||||
2. **General Settings**:
|
||||
- Set your site title (e.g., "LaForceIT Git")
|
||||
- Disable user registration unless you're hosting for multiple people
|
||||
- Enable caching to improve performance
|
||||
|
||||
3. **Admin Account**: Create your admin user with a strong password.
|
||||
|
||||
My configuration looks something like this:
|
||||
|
||||
```ini
|
||||
[server]
|
||||
DOMAIN = git.laforce.it
|
||||
SSH_DOMAIN = git.laforce.it
|
||||
ROOT_URL = https://git.laforce.it/
|
||||
DISABLE_SSH = false
|
||||
SSH_PORT = 22
|
||||
|
||||
[service]
|
||||
DISABLE_REGISTRATION = true
|
||||
REQUIRE_SIGNIN_VIEW = true
|
||||
|
||||
[security]
|
||||
INSTALL_LOCK = true
|
||||
```
|
||||
|
||||
## Integrating with Your Development Workflow
|
||||
|
||||
Now that Gitea is running, here's how I integrate it into my workflow:
|
||||
|
||||
### 1. Adding Your SSH Key
|
||||
|
||||
First, add your SSH key to Gitea:
|
||||
|
||||
1. Go to Settings > SSH / GPG Keys
|
||||
2. Click "Add Key"
|
||||
3. Paste your public key and give it a name
|
||||
|
||||
### 2. Creating Your First Repository
|
||||
|
||||
1. Click the "+" button in the top right
|
||||
2. Select "New Repository"
|
||||
3. Fill in the details and initialize with a README if desired
|
||||
|
||||
### 3. Working with Your Repository
|
||||
|
||||
To clone your new repository:
|
||||
|
||||
```bash
|
||||
git clone git@your-gitea-server:username/repo-name.git
|
||||
```
|
||||
|
||||
Now you can work with it just like any Git repository:
|
||||
|
||||
```bash
|
||||
cd repo-name
|
||||
echo "# My awesome project" > README.md
|
||||
git add README.md
|
||||
git commit -m "Update README"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
## Advanced Gitea Features
|
||||
|
||||
Gitea isn't just a basic Git server - it has several powerful features that I use daily:
|
||||
|
||||
### CI/CD with Gitea Actions
|
||||
|
||||
Gitea recently added support for Actions, which are compatible with GitHub Actions workflows. Here's a simple example:
|
||||
|
||||
```yaml
|
||||
name: Go Build
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v4
|
||||
with:
|
||||
go-version: '1.20'
|
||||
- name: Build
|
||||
run: go build -v ./...
|
||||
- name: Test
|
||||
run: go test -v ./...
|
||||
```
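
One caveat: on recent Gitea releases Actions is disabled by default, so the workflow above won't run until you switch it on in `app.ini` and register an `act_runner` against your instance. The snippet below is the relevant config stanza; runner registration details depend on your Gitea version:

```ini
[actions]
ENABLED = true
```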
|
||||
|
||||
### Webhooks for Integration
|
||||
|
||||
I use webhooks to integrate Gitea with my deployment pipeline. Here's how to set up a simple webhook:
|
||||
|
||||
1. Navigate to your repository
|
||||
2. Go to Settings > Webhooks > Add Webhook
|
||||
3. Select "Gitea" or "Custom" depending on your needs
|
||||
4. Enter the URL of your webhook receiver
|
||||
5. Choose which events trigger the webhook
|
||||
|
||||
### Mirror Repositories
|
||||
|
||||
One of my favorite features is repository mirroring. I use this to keep a backup of important GitHub repositories:
|
||||
|
||||
1. Create a new repository
|
||||
2. Go to Settings > Mirror Settings
|
||||
3. Enter the URL of the repository you want to mirror
|
||||
4. Set the sync interval
|
||||
|
||||
## Security Considerations
|
||||
|
||||
When self-hosting any service, security is a top priority. Here's how I secure my Gitea instance:
|
||||
|
||||
1. **Reverse Proxy**: I put Gitea behind Traefik with automatic SSL certificates.
|
||||
|
||||
2. **2FA**: Enable two-factor authentication for your admin account.
|
||||
|
||||
3. **Regular Backups**: I back up both the Gitea data directory and the database daily (a sketch of this follows the Traefik example below).
|
||||
|
||||
4. **Updates**: Keep Gitea updated to the latest version to get security fixes.
|
||||
|
||||
Here's a sample Traefik configuration for Gitea:
|
||||
|
||||
```yaml
|
||||
apiVersion: traefik.containo.us/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: gitea
|
||||
namespace: default
|
||||
spec:
|
||||
entryPoints:
|
||||
- websecure
|
||||
routes:
|
||||
- match: Host(`git.yourdomain.com`)
|
||||
kind: Rule
|
||||
services:
|
||||
- name: gitea
|
||||
port: 3000
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
```
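
On the backup point above, here's roughly what a nightly job can look like; it assumes the Docker Compose layout from earlier (containers named `gitea` and `gitea-db`) and uses Gitea's built-in `gitea dump` command:

```bash
#!/bin/bash
# Nightly Gitea backup - sketch based on the Docker Compose example above.

BACKUP_DIR=/srv/backups/gitea
mkdir -p "$BACKUP_DIR"

# gitea dump bundles repositories, config, and the database into one archive
docker exec -u git -w /tmp gitea \
  gitea dump -c /data/gitea/conf/app.ini -f /tmp/gitea-dump.zip
docker cp gitea:/tmp/gitea-dump.zip "$BACKUP_DIR/gitea-dump-$(date +%F).zip"

# A plain SQL dump of the database as well, for good measure
docker exec gitea-db pg_dump -U gitea gitea \
  > "$BACKUP_DIR/gitea-db-$(date +%F).sql"
```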
|
||||
|
||||
## Why I Switched from GitHub to Gitea
|
||||
|
||||
People often ask me why I bother with self-hosting when GitHub offers so much for free. Here are my reasons:
|
||||
|
||||
1. **Ownership**: No sudden changes in terms of service affecting my workflow.
|
||||
2. **Privacy**: Some projects aren't meant for public hosting.
|
||||
3. **Learning**: Managing my own services teaches me valuable skills.
|
||||
4. **Integration**: It fits perfectly with my other self-hosted services.
|
||||
5. **Performance**: Local Git operations are lightning-fast.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Gitea has been a fantastic addition to my self-hosted infrastructure. It's reliable, lightweight, and provides all the features I need without the complexity of larger solutions like GitLab.
|
||||
|
||||
Whether you're a privacy enthusiast, a homelab tinkerer, or just someone who wants complete control over your code, Gitea is worth considering. The setup is straightforward, and the rewards are significant.
|
||||
|
||||
What about you? Are you self-hosting any of your development tools? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on January 18, 2024 with information about Gitea Actions and the latest configuration options._
|
|
@@ -0,0 +1,169 @@
|
|||
---
|
||||
title: GitOps with Flux CD
|
||||
description: Implementing GitOps workflows on Kubernetes using Flux CD
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: devops
|
||||
tags:
|
||||
- kubernetes
|
||||
- gitops
|
||||
- flux
|
||||
- ci-cd
|
||||
- automation
|
||||
readTime: 10 min read
|
||||
---
|
||||
|
||||
# GitOps with Flux CD
|
||||
|
||||
GitOps is revolutionizing the way teams deploy and manage applications on Kubernetes. This guide will walk you through implementing a GitOps workflow using Flux CD, an open-source continuous delivery tool.
|
||||
|
||||
## What is GitOps?
|
||||
|
||||
GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation.
|
||||
|
||||
With GitOps:
|
||||
- Git is the single source of truth for the desired state of your infrastructure
|
||||
- Changes to the desired state are declarative and version controlled
|
||||
- Approved changes are automatically applied to your infrastructure
|
||||
|
||||
## Why Flux CD?
|
||||
|
||||
Flux CD is a GitOps tool that ensures that your Kubernetes cluster matches the desired state specified in a Git repository. Key features include:
|
||||
|
||||
- Automated sync between your Git repository and cluster state
|
||||
- Support for Kustomize, Helm, and plain Kubernetes manifests
|
||||
- Multi-tenancy via RBAC
|
||||
- Strong security practices, including image verification
|
||||
|
||||
## Installation
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- A Kubernetes cluster (K3s, Kind, or any other distribution)
|
||||
- kubectl configured to access your cluster
|
||||
- A GitHub (or GitLab/Bitbucket) account and repository
|
||||
|
||||
### Installing Flux
|
||||
|
||||
1. Install the Flux CLI:
|
||||
|
||||
```bash
|
||||
curl -s https://fluxcd.io/install.sh | sudo bash
|
||||
```
|
||||
|
||||
2. Export your GitHub personal access token:
|
||||
|
||||
```bash
|
||||
export GITHUB_TOKEN=<your-token>
|
||||
```
|
||||
|
||||
3. Bootstrap Flux:
|
||||
|
||||
```bash
|
||||
flux bootstrap github \
|
||||
--owner=<your-github-username> \
|
||||
--repository=<repository-name> \
|
||||
--path=clusters/my-cluster \
|
||||
--personal
|
||||
```
|
||||
|
||||
## Setting Up Your First Application
|
||||
|
||||
1. Create a basic directory structure in your Git repository:
|
||||
|
||||
```
|
||||
└── clusters/
|
||||
└── my-cluster/
|
||||
├── flux-system/ # Created by bootstrap
|
||||
└── apps/
|
||||
└── podinfo/
|
||||
├── namespace.yaml
|
||||
├── deployment.yaml
|
||||
└── service.yaml
|
||||
```
|
||||
|
||||
2. Create a Flux Kustomization to deploy your app:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 5m0s
|
||||
path: ./clusters/my-cluster/apps/podinfo
|
||||
prune: true
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: flux-system
|
||||
```
|
||||
|
||||
3. Commit and push your changes, and Flux will automatically deploy your application!
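
To watch the rollout happen (the `podinfo` names below match the Kustomization above; the target namespace is assumed to be `podinfo` from `namespace.yaml`):

```bash
# Watch Flux reconcile the new Kustomization
flux get kustomizations --watch

# Or trigger an immediate sync instead of waiting for the interval
flux reconcile kustomization podinfo --with-source

# Confirm the workload landed
kubectl -n podinfo get pods
```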
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Automated Image Updates
|
||||
|
||||
Flux can automatically update your deployments when new images are available:
|
||||
|
||||
```yaml
|
||||
apiVersion: image.toolkit.fluxcd.io/v1beta1
|
||||
kind: ImageRepository
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
image: ghcr.io/stefanprodan/podinfo
|
||||
interval: 1m0s
|
||||
---
|
||||
apiVersion: image.toolkit.fluxcd.io/v1beta1
|
||||
kind: ImagePolicy
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
imageRepositoryRef:
|
||||
name: podinfo
|
||||
policy:
|
||||
semver:
|
||||
range: 6.x.x
|
||||
```
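
Two things to note here: the image automation controllers are opt-in (bootstrap with `--components-extra=image-reflector-controller,image-automation-controller`), and the ImageRepository/ImagePolicy pair only tracks tags; to have Flux write updated tags back to Git you also need an ImageUpdateAutomation resource plus `# {"$imagepolicy": ...}` markers in your manifests. A rough sketch, with the branch and path assumed to match the bootstrap layout above:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  git:
    checkout:
      ref:
        branch: main          # assumed default branch
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@users.noreply.github.com
      messageTemplate: "Automated image update"
    push:
      branch: main
  update:
    path: ./clusters/my-cluster
    strategy: Setters
```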
|
||||
|
||||
### Working with Helm Charts
|
||||
|
||||
Flux makes it easy to manage Helm releases:
|
||||
|
||||
```yaml
|
||||
apiVersion: source.toolkit.fluxcd.io/v1beta2
|
||||
kind: HelmRepository
|
||||
metadata:
|
||||
name: bitnami
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 30m
|
||||
url: https://charts.bitnami.com/bitnami
|
||||
---
|
||||
apiVersion: helm.toolkit.fluxcd.io/v2beta1
|
||||
kind: HelmRelease
|
||||
metadata:
|
||||
name: redis
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 5m
|
||||
chart:
|
||||
spec:
|
||||
chart: redis
|
||||
version: "16.x"
|
||||
sourceRef:
|
||||
kind: HelmRepository
|
||||
name: bitnami
|
||||
values:
|
||||
architecture: standalone
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
Flux CD provides a powerful, secure, and flexible platform for implementing GitOps workflows. By following this guide, you'll be well on your way to managing your Kubernetes infrastructure using GitOps principles.
|
||||
|
||||
Stay tuned for more advanced GitOps patterns and best practices!
|
|
@@ -0,0 +1,18 @@
|
|||
---
|
||||
title: Setting Up a K3s Kubernetes Cluster
|
||||
pubDate: 2025-04-19
|
||||
description: A comprehensive guide to setting up and configuring a lightweight K3s Kubernetes cluster for your home lab
|
||||
category: Infrastructure
|
||||
tags:
|
||||
- kubernetes
|
||||
- k3s
|
||||
- infrastructure
|
||||
- homelab
|
||||
- containers
|
||||
draft: false
|
||||
heroImage:
|
||||
---
|
||||
|
||||
# Setting Up a K3s Kubernetes Cluster
|
||||
|
||||
Coming soon...
|
|
@@ -0,0 +1,90 @@
|
|||
---
|
||||
title: K3s Installation Guide
|
||||
description: A comprehensive guide to installing and configuring K3s for your home lab
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/k3installation.png
|
||||
category: kubernetes
|
||||
tags:
|
||||
- kubernetes
|
||||
- k3s
|
||||
- homelab
|
||||
- tutorial
|
||||
readTime: 8 min read
|
||||
---
|
||||
|
||||
# K3s Installation Guide
|
||||
|
||||
K3s is a lightweight Kubernetes distribution designed for resource-constrained environments, perfect for home labs and edge computing. This guide will walk you through the installation process from start to finish.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A machine running Linux (Ubuntu 20.04+ recommended)
|
||||
- Minimum 1GB RAM (2GB+ recommended for multi-workload clusters)
|
||||
- 20GB+ of disk space
|
||||
- Root access or sudo privileges
|
||||
|
||||
## Basic Installation
|
||||
|
||||
The simplest way to install K3s is using the installation script:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
This will install K3s as a server and start the service. To verify it's running:
|
||||
|
||||
```bash
|
||||
sudo k3s kubectl get node
|
||||
```
|
||||
|
||||
## Advanced Installation Options
|
||||
|
||||
### Master Node with Custom Options
|
||||
|
||||
For more control, you can use environment variables:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
|
||||
```
|
||||
|
||||
### Adding Worker Nodes
|
||||
|
||||
1. On your master node, get the node token:
|
||||
|
||||
```bash
|
||||
sudo cat /var/lib/rancher/k3s/server/node-token
|
||||
```
|
||||
|
||||
2. On each worker node, run:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://<master-node-ip>:6443 K3S_TOKEN=<node-token> sh -
|
||||
```
|
||||
|
||||
## Accessing the Cluster
|
||||
|
||||
After installation, the kubeconfig file is written to `/etc/rancher/k3s/k3s.yaml`. To use kubectl externally:
|
||||
|
||||
1. Copy this file to your local machine
|
||||
2. Set the KUBECONFIG environment variable:
|
||||
|
||||
```bash
|
||||
export KUBECONFIG=/path/to/k3s.yaml
|
||||
```
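
One gotcha: the copied `k3s.yaml` points its `server:` field at `https://127.0.0.1:6443`, so from another machine you need to swap in the master's address first:

```bash
# Replace the loopback address with your master node's IP or hostname
sed -i 's/127.0.0.1/<master-node-ip>/' /path/to/k3s.yaml

# Then verify connectivity
kubectl get nodes
```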
|
||||
|
||||
## Uninstalling K3s
|
||||
|
||||
If you need to uninstall:
|
||||
|
||||
- On server nodes: `/usr/local/bin/k3s-uninstall.sh`
|
||||
- On agent nodes: `/usr/local/bin/k3s-agent-uninstall.sh`
|
||||
|
||||
## Next Steps
|
||||
|
||||
Now that you have K3s running, you might want to:
|
||||
|
||||
- Deploy your first application
|
||||
- Set up persistent storage
|
||||
- Configure ingress for external access
|
||||
|
||||
Stay tuned for more guides on these topics!
|
|
@@ -0,0 +1,289 @@
|
|||
---
|
||||
title: Setting Up Prometheus Monitoring in Kubernetes
|
||||
description: A comprehensive guide to implementing Prometheus monitoring in your Kubernetes cluster
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: devops
|
||||
tags:
|
||||
- kubernetes
|
||||
- monitoring
|
||||
- prometheus
|
||||
- grafana
|
||||
- observability
|
||||
readTime: 9 min read
|
||||
---
|
||||
|
||||
# Setting Up Prometheus Monitoring in Kubernetes
|
||||
|
||||
Effective monitoring is crucial for maintaining a healthy Kubernetes environment. Prometheus has become the de facto standard for metrics collection and alerting in cloud-native environments. This guide will walk you through setting up a complete Prometheus monitoring stack in your Kubernetes cluster.
|
||||
|
||||
## Why Prometheus?
|
||||
|
||||
Prometheus offers several advantages for Kubernetes monitoring:
|
||||
|
||||
- **Pull-based architecture**: Simplifies configuration and security
|
||||
- **Powerful query language (PromQL)**: For flexible data analysis
|
||||
- **Service discovery**: Automatically finds targets in dynamic environments
|
||||
- **Rich ecosystem**: Wide range of exporters and integrations
|
||||
- **CNCF graduated project**: Strong community and vendor support
|
||||
|
||||
## Components of the Monitoring Stack
|
||||
|
||||
We'll set up a complete monitoring stack consisting of:
|
||||
|
||||
1. **Prometheus**: Core metrics collection and storage
|
||||
2. **Alertmanager**: Handles alerts and notifications
|
||||
3. **Grafana**: Visualization and dashboards
|
||||
4. **Node Exporter**: Collects host-level metrics
|
||||
5. **kube-state-metrics**: Collects Kubernetes state metrics
|
||||
6. **Prometheus Operator**: Simplifies Prometheus management in Kubernetes
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A running Kubernetes cluster (K3s, EKS, GKE, etc.)
|
||||
- kubectl configured to access your cluster
|
||||
- Helm 3 installed
|
||||
|
||||
## Installation Using Helm
|
||||
|
||||
The easiest way to deploy Prometheus is using the kube-prometheus-stack Helm chart, which includes all the components mentioned above.
|
||||
|
||||
### 1. Add the Prometheus Community Helm Repository
|
||||
|
||||
```bash
|
||||
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
|
||||
helm repo update
|
||||
```
|
||||
|
||||
### 2. Create a Namespace for Monitoring
|
||||
|
||||
```bash
|
||||
kubectl create namespace monitoring
|
||||
```
|
||||
|
||||
### 3. Configure Values
|
||||
|
||||
Create a `values.yaml` file with your custom configuration:
|
||||
|
||||
```yaml
|
||||
prometheus:
|
||||
prometheusSpec:
|
||||
retention: 15d
|
||||
resources:
|
||||
requests:
|
||||
memory: 256Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 2Gi
|
||||
cpu: 500m
|
||||
storageSpec:
|
||||
volumeClaimTemplate:
|
||||
spec:
|
||||
storageClassName: standard
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 20Gi
|
||||
|
||||
alertmanager:
|
||||
alertmanagerSpec:
|
||||
storage:
|
||||
volumeClaimTemplate:
|
||||
spec:
|
||||
storageClassName: standard
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
|
||||
grafana:
|
||||
persistence:
|
||||
enabled: true
|
||||
storageClassName: standard
|
||||
size: 10Gi
|
||||
adminPassword: "prom-operator" # Change this!
|
||||
|
||||
nodeExporter:
|
||||
enabled: true
|
||||
|
||||
kubeStateMetrics:
|
||||
enabled: true
|
||||
```
|
||||
|
||||
### 4. Install the Helm Chart
|
||||
|
||||
```bash
|
||||
helm install prometheus prometheus-community/kube-prometheus-stack \
|
||||
--namespace monitoring \
|
||||
--values values.yaml
|
||||
```
|
||||
|
||||
### 5. Verify the Installation
|
||||
|
||||
Check that all the pods are running:
|
||||
|
||||
```bash
|
||||
kubectl get pods -n monitoring
|
||||
```
|
||||
|
||||
## Accessing the UIs
|
||||
|
||||
By default, the components don't have external access. You can use port-forwarding to access them:
|
||||
|
||||
### Prometheus UI
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
|
||||
```
|
||||
|
||||
Then access Prometheus at http://localhost:9090
|
||||
|
||||
### Grafana
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
|
||||
```
|
||||
|
||||
Then access Grafana at http://localhost:3000 (default credentials: admin/prom-operator)
|
||||
|
||||
### Alertmanager
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-alertmanager 9093:9093
|
||||
```
|
||||
|
||||
Then access Alertmanager at http://localhost:9093
|
||||
|
||||
## For Production: Exposing Services
|
||||
|
||||
For production environments, you'll want to set up proper ingress. Here's an example using a basic Ingress resource:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: prometheus-ingress
|
||||
namespace: monitoring
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/ssl-redirect: "true"
|
||||
spec:
|
||||
rules:
|
||||
- host: prometheus.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-operated
|
||||
port:
|
||||
number: 9090
|
||||
- host: grafana.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-grafana
|
||||
port:
|
||||
number: 80
|
||||
- host: alertmanager.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-alertmanager
|
||||
port:
|
||||
number: 9093
|
||||
```
|
||||
|
||||
## Configuring Alerting
|
||||
|
||||
### 1. Set Up Alert Rules
|
||||
|
||||
Alert rules can be created using the PrometheusRule custom resource:
|
||||
|
||||
```yaml
|
||||
apiVersion: monitoring.coreos.com/v1
|
||||
kind: PrometheusRule
|
||||
metadata:
|
||||
name: node-alerts
|
||||
namespace: monitoring
|
||||
labels:
|
||||
release: prometheus
|
||||
spec:
|
||||
groups:
|
||||
- name: node.rules
|
||||
rules:
|
||||
- alert: HighNodeCPU
|
||||
expr: instance:node_cpu_utilisation:rate1m > 0.8
|
||||
for: 5m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "High CPU usage on {{ $labels.instance }}"
|
||||
description: "CPU usage is above 80% for 5 minutes on node {{ $labels.instance }}"
|
||||
```
|
||||
|
||||
### 2. Configure Alert Receivers
|
||||
|
||||
Configure Alertmanager to send notifications by creating a Secret with your configuration:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: alertmanager-prometheus-alertmanager
|
||||
namespace: monitoring
|
||||
stringData:
|
||||
alertmanager.yaml: |
|
||||
global:
|
||||
resolve_timeout: 5m
|
||||
slack_api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
|
||||
|
||||
route:
|
||||
group_by: ['job', 'alertname', 'namespace']
|
||||
group_wait: 30s
|
||||
group_interval: 5m
|
||||
repeat_interval: 12h
|
||||
receiver: 'slack-notifications'
|
||||
routes:
|
||||
- receiver: 'slack-notifications'
|
||||
matchers:
|
||||
- severity =~ "warning|critical"
|
||||
|
||||
receivers:
|
||||
- name: 'slack-notifications'
|
||||
slack_configs:
|
||||
- channel: '#alerts'
|
||||
send_resolved: true
|
||||
title: '{{ template "slack.default.title" . }}'
|
||||
text: '{{ template "slack.default.text" . }}'
|
||||
type: Opaque
|
||||
```
|
||||
|
||||
## Custom Dashboards
|
||||
|
||||
Grafana comes pre-configured with several useful dashboards, but you can import more from [Grafana.com](https://grafana.com/grafana/dashboards/).
|
||||
|
||||
Some recommended dashboard IDs to import:
|
||||
- 1860: Node Exporter Full
|
||||
- 12740: Kubernetes Monitoring
|
||||
- 13332: Prometheus Stats
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Insufficient Resources**: Prometheus can be resource-intensive. Adjust resource limits if pods are being OOMKilled.
|
||||
2. **Storage Issues**: Ensure your storage class supports the access modes you've configured.
|
||||
3. **ServiceMonitor not working**: Check that the label selectors match your services.
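
For the ServiceMonitor case, it usually comes down to two selectors: the labels Prometheus uses to pick up ServiceMonitors, and the labels each ServiceMonitor uses to pick up Services. A quick way to compare them (namespace and release names are assumptions based on a typical kube-prometheus-stack install):

```bash
# Which ServiceMonitor labels does Prometheus select on?
kubectl -n monitoring get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'; echo

# List ServiceMonitors and their labels to see if they match
kubectl -n monitoring get servicemonitors --show-labels

# Check that a ServiceMonitor's selector matches your Service's labels
kubectl -n monitoring get servicemonitor <name> -o yaml | grep -A5 selector
```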
|
||||
|
||||
## Conclusion
|
||||
|
||||
You now have a fully functional Prometheus monitoring stack for your Kubernetes cluster. This setup provides comprehensive metrics collection, visualization, and alerting capabilities essential for maintaining a healthy and performant cluster.
|
||||
|
||||
In future articles, we'll explore advanced topics like custom exporters, recording rules for performance, and integrating with other observability tools like Loki for logs and Tempo for traces.
|
|
@ -0,0 +1,191 @@
|
|||
---
|
||||
title: Complete Proxmox VE Setup Guide
|
||||
description: A step-by-step guide to setting up Proxmox VE for your home lab virtualization needs
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: infrastructure
|
||||
tags:
|
||||
- proxmox
|
||||
- virtualization
|
||||
- homelab
|
||||
- infrastructure
|
||||
readTime: 12 min read
|
||||
---
|
||||
|
||||
# Complete Proxmox VE Setup Guide
|
||||
|
||||
Proxmox Virtual Environment (VE) is a complete open-source server management platform for enterprise virtualization. It tightly integrates KVM hypervisor and LXC containers, software-defined storage and networking functionality, on a single platform. This guide will walk you through installing and configuring Proxmox VE for your home lab.
|
||||
|
||||
## Why Choose Proxmox VE?
|
||||
|
||||
- **Open Source**: Free to use with optional paid enterprise support
|
||||
- **Full-featured**: Combines KVM hypervisor and LXC containers
|
||||
- **Web Interface**: Easy-to-use management interface
|
||||
- **Clustering**: Built-in high availability features
|
||||
- **Storage Flexibility**: Support for local, SAN, NFS, Ceph, and more
|
||||
|
||||
## Hardware Requirements
|
||||
|
||||
- 64-bit CPU with virtualization extensions (Intel VT-x/AMD-V)
|
||||
- At least 2GB RAM (8GB+ recommended)
|
||||
- Hard drive for OS installation (SSD recommended)
|
||||
- Additional storage for VMs and containers
|
||||
- Network interface card
|
||||
|
||||
## Installation
|
||||
|
||||
### 1. Prepare for Installation
|
||||
|
||||
1. Download the Proxmox VE ISO from [proxmox.com/downloads](https://www.proxmox.com/downloads)
|
||||
2. Create a bootable USB drive using tools like Rufus, Etcher, or dd
|
||||
3. Ensure virtualization is enabled in your BIOS/UEFI
|
||||
|
||||
### 2. Install Proxmox VE
|
||||
|
||||
1. Boot from the USB drive
|
||||
2. Select "Install Proxmox VE"
|
||||
3. Accept the EULA
|
||||
4. Select the target hard drive (this will erase all data on the drive)
|
||||
5. Configure country, time zone, and keyboard layout
|
||||
6. Set a strong root password and provide an email address
|
||||
7. Configure network settings:
|
||||
- Enter a hostname (FQDN format: proxmox.yourdomain.local)
|
||||
- IP address, netmask, gateway
|
||||
- DNS server
|
||||
8. Review the settings and confirm to start the installation
|
||||
9. Once completed, remove the USB drive and reboot
|
||||
|
||||
### 3. Initial Configuration
|
||||
|
||||
Access the web interface by navigating to `https://<your-proxmox-ip>:8006` in your browser. Log in with:
|
||||
- Username: root
|
||||
- Password: (the one you set during installation)
|
||||
|
||||
## Post-Installation Tasks
|
||||
|
||||
### 1. Update Proxmox VE
|
||||
|
||||
```bash
|
||||
apt update
|
||||
apt dist-upgrade
|
||||
```
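
Note: without a subscription, the default enterprise repository will cause errors during `apt update`. A common workaround for home labs is to disable it and add the no-subscription repository instead (this sketch assumes Proxmox VE 8 on Debian bookworm; adjust the suite name for your version):

```bash
# Disable the enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
```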
|
||||
|
||||
### 2. Remove Subscription Notice (Optional)
|
||||
|
||||
For home lab use, you can remove the subscription notice:
|
||||
|
||||
```bash
|
||||
echo "DPkg::Post-Invoke { \"dpkg -V proxmox-widget-toolkit | grep -q '/proxmoxlib\.js$'; if [ \$? -eq 1 ]; then { echo 'Removing subscription nag from UI...'; sed -i '/.*data\.status.*subscription.*/d' /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js; }; fi\"; };" > /etc/apt/apt.conf.d/no-subscription-warning
|
||||
```
|
||||
|
||||
### 3. Configure Storage
|
||||
|
||||
#### Local Storage
|
||||
|
||||
By default, Proxmox VE creates several storage locations:
|
||||
|
||||
- **local**: For ISO images, container templates, and snippets
|
||||
- **local-lvm**: For VM disk images
|
||||
|
||||
To add a new storage, go to Datacenter > Storage > Add:
|
||||
|
||||
- For local directories: Select "Directory"
|
||||
- For network storage: Select "NFS" or "CIFS"
|
||||
- For block storage: Select "LVM", "LVM-Thin", or "ZFS"
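
The same can be done from the shell with `pvesm`. For example, adding an NFS share (the storage ID, server address, export path, and content types below are assumptions for your environment):

```bash
pvesm add nfs synology-nfs \
  --server 192.168.1.50 \
  --export /volume1/proxmox \
  --content backup,iso,vztmpl
```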
|
||||
|
||||
#### ZFS Storage Pool (Recommended)
|
||||
|
||||
ZFS offers excellent performance and data protection:
|
||||
|
||||
```bash
|
||||
# Create a ZFS pool using available disks
|
||||
zpool create -f rpool /dev/sdb /dev/sdc
|
||||
|
||||
# Add the pool to Proxmox
|
||||
pvesm add zfspool rpool -pool rpool
|
||||
```
|
||||
|
||||
### 4. Set Up Networking
|
||||
|
||||
#### Network Bridges
|
||||
|
||||
Proxmox VE creates a default bridge (vmbr0) during installation. To add more:
|
||||
|
||||
1. Go to Node > Network > Create > Linux Bridge
|
||||
2. Configure the bridge:
|
||||
- Name: vmbr1
|
||||
- IP address/CIDR: 192.168.1.1/24 (or leave empty for unmanaged bridge)
|
||||
- Bridge ports: (physical interface, e.g., eth1)
|
||||
|
||||
#### VLAN Configuration
|
||||
|
||||
For VLAN support:
|
||||
|
||||
1. Ensure the bridge has VLAN awareness enabled
|
||||
2. In VM network settings, specify VLAN tags
|
||||
|
||||
## Creating Virtual Machines and Containers
|
||||
|
||||
### Virtual Machines (KVM)
|
||||
|
||||
1. Go to Create VM
|
||||
2. Fill out the wizard:
|
||||
- General: Name, Resource Pool
|
||||
- OS: ISO image, type, and version
|
||||
- System: BIOS/UEFI, Machine type
|
||||
- Disks: Size, format, storage location
|
||||
- CPU: Cores, type
|
||||
- Memory: RAM size
|
||||
- Network: Bridge, model
|
||||
3. Click Finish to create the VM
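
If you prefer scripting, the same VM can be created with `qm` from the shell. Here's a rough equivalent of the wizard above (the VM ID, ISO file name, and storage names are placeholders):

```bash
qm create 101 \
  --name ubuntu-test \
  --memory 4096 \
  --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --boot 'order=scsi0;ide2' \
  --ostype l26
```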
|
||||
|
||||
### Containers (LXC)
|
||||
|
||||
1. Go to Create CT
|
||||
2. Fill out the wizard:
|
||||
- General: Hostname, Password
|
||||
- Template: Select from available templates
|
||||
- Disks: Size, storage location
|
||||
- CPU: Cores
|
||||
- Memory: RAM size
|
||||
- Network: IP address, bridge
|
||||
- DNS: DNS servers
|
||||
3. Click Finish to create the container
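
Containers can also be created from the shell with `pct`. A minimal sketch (the template file name, storage, and password are assumptions — check `pveam list local` for the templates you actually have):

```bash
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname web01 \
  --cores 1 \
  --memory 1024 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --password 'changeme'
```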
|
||||
|
||||
## Backup Configuration
|
||||
|
||||
### Setting Up Backups
|
||||
|
||||
1. Go to Datacenter > Backup
|
||||
2. Add a new backup job:
|
||||
- Select storage location
|
||||
- Set schedule (daily, weekly, etc.)
|
||||
- Choose VMs/containers to back up
|
||||
- Configure compression and mode
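
Backups can also be triggered ad hoc from the shell with `vzdump`, which is what the scheduled job runs under the hood. For example (the VM ID and storage name are assumptions):

```bash
vzdump 101 --storage local --mode snapshot --compress zstd
```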
|
||||
|
||||
## Performance Tuning
|
||||
|
||||
### CPU
|
||||
|
||||
For VMs that need consistent performance:
|
||||
|
||||
- Set CPU type to "host" for best performance
|
||||
- Reserve CPU cores for critical VMs
|
||||
- Use CPU pinning for high-performance workloads
|
||||
|
||||
### Memory
|
||||
|
||||
- Enable KSM (Kernel Same-page Merging) for better memory usage
|
||||
- Set appropriate memory ballooning for VMs
|
||||
|
||||
### Storage
|
||||
|
||||
- Use SSDs for VM disks when possible
|
||||
- Enable write-back caching for improved performance
|
||||
- Consider ZFS for important data with appropriate RAM allocation
|
||||
|
||||
## Conclusion
|
||||
|
||||
Proxmox VE is a powerful, flexible virtualization platform perfect for home labs. With its combination of virtual machines and containers, you can build a versatile lab environment for testing, development, and running production services.
|
||||
|
||||
After following this guide, you should have a fully functional Proxmox VE server ready to host your virtual infrastructure. In future articles, we'll explore advanced topics like clustering, high availability, and integration with other infrastructure tools.
|
|
@ -0,0 +1,23 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to push blog content to Gitea with proper symlink handling
|
||||
# This ensures actual content is pushed, not just symlinks
|
||||
|
||||
# Navigate to the blog repository
|
||||
cd ~/Projects/laforceit-blog
|
||||
|
||||
# Configure git to handle symlinks correctly
|
||||
git config --local core.symlinks true
|
||||
|
||||
# Add all changes (the pre-commit hook converts the symlinks into real content)
|
||||
git add -A
|
||||
|
||||
# Get commit message from argument or use default
|
||||
COMMIT_MSG=${1:-"Update blog content"}
|
||||
|
||||
# Commit changes
|
||||
git commit -m "$COMMIT_MSG"
|
||||
|
||||
# Push to the repository
|
||||
git push origin main
|
||||
|
|
@ -0,0 +1,416 @@
|
|||
---
|
||||
title: Building a Digital Garden with Quartz, Obsidian, and Astro
|
||||
description: How to transform your Obsidian notes into a beautiful, connected public digital garden using Quartz and Astro
|
||||
pubDate: 2025-04-18
|
||||
updatedDate: 2025-04-19
|
||||
category: Services
|
||||
tags:
|
||||
- quartz
|
||||
- obsidian
|
||||
- digital-garden
|
||||
- knowledge-management
|
||||
- astro
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've been taking digital notes for decades now. From simple `.txt` files to OneNote, Evernote, Notion, and now Obsidian. But for years, I've been wrestling with a question: how do I share my knowledge with others in a way that preserves the connections between ideas?
|
||||
|
||||
Enter [Quartz](https://quartz.jzhao.xyz/) - an open-source static site generator designed specifically for transforming Obsidian vaults into interconnected digital gardens. I've been using it with Astro to power this very blog, and today I want to show you how you can do the same.
|
||||
|
||||
## What is a Digital Garden?
|
||||
|
||||
Before we dive into the technical stuff, let's talk about what a digital garden actually is. Unlike traditional blogs organized chronologically, a digital garden is more like... well, a garden! It's a collection of interconnected notes that grow over time.
|
||||
|
||||
Think of each note as a plant. Some are seedlings (early ideas), some are in full bloom (well-developed thoughts), and others might be somewhere in between. The beauty of a digital garden is that it evolves organically, with connections forming between ideas just as they do in our brains.
|
||||
|
||||
## Why Quartz + Obsidian + Astro?
|
||||
|
||||
I settled on this stack after trying numerous solutions, and here's why:
|
||||
|
||||
- **Obsidian**: Provides a fantastic editing experience with bi-directional linking, graph views, and markdown support.
|
||||
- **Quartz**: Transforms Obsidian vaults into interconnected websites with minimal configuration.
|
||||
- **Astro**: Adds flexibility for custom layouts, components, and integrations not available in Quartz alone.
|
||||
|
||||
It's a match made in heaven - I get the best note-taking experience with Obsidian, the connection-preserving features of Quartz, and the full power of a modern web framework with Astro.
|
||||
|
||||
## Setting Up Your Digital Garden
|
||||
|
||||
Let's walk through how to set this up step by step.
|
||||
|
||||
### Step 1: Install Obsidian and Create a Vault
|
||||
|
||||
If you haven't already, [download Obsidian](https://obsidian.md/) and create a new vault for your digital garden content. I recommend having a separate vault for public content to keep it clean.
|
||||
|
||||
### Step 2: Set Up Quartz
|
||||
|
||||
Quartz is essentially a template for your Obsidian content. Here's how to get it running:
|
||||
|
||||
```bash
|
||||
# Clone the Quartz repository
|
||||
git clone https://github.com/jackyzha0/quartz.git
|
||||
cd quartz
|
||||
|
||||
# Install dependencies
|
||||
npm install
|
||||
|
||||
# Copy your Obsidian vault content to the content folder
|
||||
# (or symlink it, which is what I do)
|
||||
ln -s /path/to/your/obsidian/vault content
|
||||
```
|
||||
|
||||
Once installed, you can customize your Quartz configuration in the `quartz.config.ts` file. Here's a snippet of mine:
|
||||
|
||||
```typescript
|
||||
// quartz.config.ts
|
||||
const config: QuartzConfig = {
|
||||
configuration: {
|
||||
pageTitle: "LaForceIT Digital Garden",
|
||||
enableSPA: true,
|
||||
enablePopovers: true,
|
||||
analytics: {
|
||||
provider: "plausible",
|
||||
},
|
||||
baseUrl: "blog.laforceit.com",
|
||||
ignorePatterns: ["private", "templates", ".obsidian"],
|
||||
theme: {
|
||||
typography: {
|
||||
header: "Space Grotesk",
|
||||
body: "Space Grotesk",
|
||||
code: "JetBrains Mono",
|
||||
},
|
||||
colors: {
|
||||
lightMode: {
|
||||
light: "#fafafa",
|
||||
lightgray: "#e5e5e5",
|
||||
gray: "#b8b8b8",
|
||||
darkgray: "#4e4e4e",
|
||||
dark: "#2b2b2b",
|
||||
secondary: "#284b63",
|
||||
tertiary: "#84a59d",
|
||||
highlight: "rgba(143, 159, 169, 0.15)",
|
||||
},
|
||||
darkMode: {
|
||||
light: "#050a18",
|
||||
lightgray: "#0f172a",
|
||||
gray: "#1e293b",
|
||||
darkgray: "#94a3b8",
|
||||
dark: "#e2e8f0",
|
||||
secondary: "#06b6d4",
|
||||
tertiary: "#3b82f6",
|
||||
highlight: "rgba(6, 182, 212, 0.15)",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
plugins: {
|
||||
transformers: [
|
||||
// plugins here
|
||||
],
|
||||
filters: [
|
||||
// filters here
|
||||
],
|
||||
emitters: [
|
||||
// emitters here
|
||||
],
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
### Step 3: Integrating with Astro
|
||||
|
||||
Now, where Quartz ends and Astro begins is where the magic really happens. Here's how I integrated them:
|
||||
|
||||
1. Create a new Astro project:
|
||||
|
||||
```bash
|
||||
npm create astro@latest my-digital-garden
|
||||
cd my-digital-garden
|
||||
```
|
||||
|
||||
2. Set up your Astro project structure:
|
||||
|
||||
```
|
||||
my-digital-garden/
|
||||
├── public/
|
||||
├── src/
|
||||
│ ├── components/
|
||||
│ ├── layouts/
|
||||
│ ├── pages/
|
||||
│ │ └── index.astro
|
||||
│ └── content/ (this will point to your Quartz content)
|
||||
├── astro.config.mjs
|
||||
└── package.json
|
||||
```
|
||||
|
||||
3. Configure Astro to work with Quartz's output:
|
||||
|
||||
```typescript
|
||||
// astro.config.mjs
|
||||
import { defineConfig } from 'astro/config';
|
||||
|
||||
export default defineConfig({
|
||||
integrations: [
|
||||
// Your integrations here
|
||||
],
|
||||
markdown: {
|
||||
remarkPlugins: [
|
||||
// The same remark plugins used in Quartz
|
||||
],
|
||||
rehypePlugins: [
|
||||
// The same rehype plugins used in Quartz
|
||||
]
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
4. Create a component for the Obsidian graph view (similar to what I use on this blog):
|
||||
|
||||
```astro
|
||||
---
|
||||
// Graph.astro
|
||||
---
|
||||
|
||||
<div class="graph-container">
|
||||
<div class="graph-visualization">
|
||||
<div class="graph-overlay"></div>
|
||||
<div class="graph-nodes" id="obsidian-graph"></div>
|
||||
</div>
|
||||
<div class="graph-caption">Interactive visualization of interconnected notes</div>
|
||||
</div>
|
||||
|
||||
<script src="/blog/scripts/neural-network.js" defer></script>
|
||||
|
||||
<style>
|
||||
.graph-container {
|
||||
width: 100%;
|
||||
height: 400px;
|
||||
position: relative;
|
||||
border-radius: 0.75rem;
|
||||
overflow: hidden;
|
||||
background: rgba(15, 23, 42, 0.6);
|
||||
border: 1px solid rgba(56, 189, 248, 0.2);
|
||||
}
|
||||
|
||||
.graph-visualization {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.graph-overlay {
|
||||
position: absolute;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
background: radial-gradient(
|
||||
circle at center,
|
||||
transparent 0%,
|
||||
rgba(15, 23, 42, 0.5) 100%
|
||||
);
|
||||
z-index: 1;
|
||||
pointer-events: none;
|
||||
}
|
||||
|
||||
.graph-nodes {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
position: relative;
|
||||
z-index: 2;
|
||||
}
|
||||
|
||||
.graph-caption {
|
||||
position: absolute;
|
||||
bottom: 0.75rem;
|
||||
left: 0;
|
||||
width: 100%;
|
||||
text-align: center;
|
||||
color: rgba(148, 163, 184, 0.8);
|
||||
font-size: 0.875rem;
|
||||
z-index: 3;
|
||||
}
|
||||
</style>
|
||||
```
|
||||
|
||||
## Writing Content for Your Digital Garden
|
||||
|
||||
Now that you have the technical setup, let's talk about how to structure your content. Here's how I approach it:
|
||||
|
||||
### 1. Use Meaningful Frontmatter
|
||||
|
||||
Each note should have frontmatter that helps organize and categorize it:
|
||||
|
||||
```markdown
|
||||
---
|
||||
title: "Building a Digital Garden"
|
||||
description: "How to create a connected knowledge base with Obsidian and Quartz"
|
||||
pubDate: 2023-11-05
|
||||
updatedDate: 2024-02-10
|
||||
category: "Knowledge Management"
|
||||
tags: ["digital-garden", "obsidian", "quartz", "notes"]
|
||||
---
|
||||
```
|
||||
|
||||
### 2. Create Meaningful Connections
|
||||
|
||||
The power of a digital garden comes from connections. Use Obsidian's `[[wiki-links]]` liberally to connect related concepts:
|
||||
|
||||
```markdown
|
||||
I'm using the [[quartz-digital-garden|Quartz framework]] to power my digital garden. It works well with [[obsidian-setup|my Obsidian workflow]].
|
||||
```
|
||||
|
||||
### 3. Use Consistent Structure for Your Notes
|
||||
|
||||
I follow a template for most of my notes to maintain consistency:
|
||||
|
||||
- Brief introduction/definition
|
||||
- Why it matters
|
||||
- Key concepts
|
||||
- Examples or code snippets
|
||||
- References to related notes
|
||||
- External resources
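
In practice, that template is just a skeleton markdown file I duplicate for each new note — something along these lines (the section names are my own convention, not a Quartz requirement):

```markdown
# Topic Name

Brief definition of the topic in one or two sentences.

## Why it matters
...

## Key concepts
- Concept one
- Concept two

## Examples
...

## Related notes
- [[another-note]]

## Resources
- https://example.com
```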
|
||||
|
||||
### 4. Leverage Obsidian Features
|
||||
|
||||
Make use of Obsidian's unique features that Quartz preserves:
|
||||
|
||||
- **Callouts** for highlighting important information
|
||||
- **Dataview** for creating dynamic content (if using the Dataview plugin)
|
||||
- **Graphs** to visualize connections
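
Callouts in particular translate nicely to the published site. The syntax is standard Obsidian (the callout type and title here are just examples):

```markdown
> [!tip] Start small
> You don't need to publish your whole vault — a handful of well-linked notes is enough to begin.
```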
|
||||
|
||||
## Deploying Your Digital Garden
|
||||
|
||||
Here's how I deploy my digital garden to Cloudflare Pages:
|
||||
|
||||
1. **Build Configuration**:
|
||||
|
||||
```bash
|
||||
# Build command
|
||||
astro build
|
||||
|
||||
// Output directory
|
||||
dist
|
||||
```
|
||||
|
||||
2. **Automated Deployment from Git**:
|
||||
|
||||
I have a GitHub action that publishes my content whenever I push changes:
|
||||
|
||||
```yaml
|
||||
name: Deploy to Cloudflare Pages
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main]
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
submodules: true
|
||||
- uses: actions/setup-node@v3
|
||||
with:
|
||||
node-version: 18
|
||||
- name: Install Dependencies
|
||||
run: npm ci
|
||||
- name: Build
|
||||
run: npm run build
|
||||
- name: Deploy to Cloudflare Pages
|
||||
uses: cloudflare/pages-action@v1
|
||||
with:
|
||||
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
|
||||
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
|
||||
projectName: digital-garden
|
||||
directory: dist
|
||||
```
|
||||
|
||||
## My Digital Garden Workflow
|
||||
|
||||
Here's my actual workflow for maintaining my digital garden:
|
||||
|
||||
1. **Daily note-taking** in Obsidian (private vault)
|
||||
2. **Weekly review** where I refine notes and connections
|
||||
3. **Publishing prep** where I move polished notes to my public vault
|
||||
4. **Git commit and push** which triggers the deployment
|
||||
|
||||
The beauty of this system is that my private thinking and public sharing use the same tools and formats, reducing friction between capturing thoughts and sharing them.
|
||||
|
||||
## Adding Interactive Elements
|
||||
|
||||
One of my favorite parts of using Astro with Quartz is that I can add interactive elements to my digital garden. For example, the neural graph visualization you see on this blog:
|
||||
|
||||
```javascript
|
||||
// Simplified version of the neural network graph code
|
||||
document.addEventListener('DOMContentLoaded', () => {
|
||||
const container = document.querySelector('.graph-nodes');
|
||||
if (!container) return;
|
||||
|
||||
// Configuration
|
||||
const CONNECTION_DISTANCE = 30;
|
||||
const MIN_NODE_SIZE = 4;
|
||||
const MAX_NODE_SIZE = 12;
|
||||
|
||||
// Fetch blog data from the same origin
|
||||
async function fetchQuartzData() {
|
||||
// In production, fetch from your actual API
|
||||
return {
|
||||
nodes: [
|
||||
{
|
||||
id: 'digital-garden',
|
||||
title: 'Digital Garden',
|
||||
tags: ['knowledge-management', 'notes'],
|
||||
category: 'concept'
|
||||
},
|
||||
// More nodes here
|
||||
],
|
||||
links: [
|
||||
{ source: 'digital-garden', target: 'obsidian' },
|
||||
// More links here
|
||||
]
|
||||
};
|
||||
}
|
||||
|
||||
// Create the visualization
|
||||
fetchQuartzData().then(graphData => {
|
||||
// Create nodes and connections
|
||||
// (implementation details omitted for brevity)
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
## Tips for a Successful Digital Garden
|
||||
|
||||
After maintaining my digital garden for over a year, here are my top tips:
|
||||
|
||||
1. **Start small** - Begin with a few well-connected notes rather than trying to publish everything at once.
|
||||
|
||||
2. **Focus on connections** - The value is in the links between notes, not just the notes themselves.
|
||||
|
||||
3. **Embrace imperfection** - Digital gardens are meant to grow and evolve; they're never "finished."
|
||||
|
||||
4. **Build in public** - Share your process and learnings as you go.
|
||||
|
||||
5. **Use consistent formatting** - Makes it easier for readers to navigate your garden.
|
||||
|
||||
## The Impact of My Digital Garden
|
||||
|
||||
Since starting my digital garden, I've experienced several unexpected benefits:
|
||||
|
||||
- **Clearer thinking** - Writing for an audience forces me to clarify my thoughts.
|
||||
- **Unexpected connections** - I've discovered relationships between ideas I hadn't noticed before.
|
||||
- **Community building** - I've connected with others interested in the same topics.
|
||||
- **Learning accountability** - Publishing regularly motivates me to keep learning.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Building a digital garden with Quartz, Obsidian, and Astro has transformed how I learn and share knowledge. It's become so much more than a blog - it's a living representation of my thinking that grows more valuable with each connection I make.
|
||||
|
||||
If you're considering starting your own digital garden, I hope this guide gives you a solid foundation. Remember, the best garden is the one you actually tend to, so start simple and let it grow naturally over time.
|
||||
|
||||
What about you? Are you maintaining a digital garden or thinking about starting one? I'd love to hear about your experience in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on February 10, 2024 with information about the latest Quartz configuration options and integration with Astro._
|
|
@ -0,0 +1,328 @@
|
|||
---
|
||||
title: "Managing Kubernetes with Rancher: The Home Lab Way"
|
||||
description: How to set up, configure, and get the most out of Rancher for managing your home Kubernetes clusters
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- rancher
|
||||
- kubernetes
|
||||
- k3s
|
||||
- devops
|
||||
- containers
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've been running Kubernetes at home for years now, and I've tried just about every management tool out there. From kubectl and a bunch of YAML files to various dashboards and UIs, I've experimented with it all. But the one tool that's been a constant in my home lab journey is [Rancher](https://rancher.com/) - a complete container management platform that makes Kubernetes management almost... dare I say it... enjoyable?
|
||||
|
||||
Today, I want to walk you through setting up Rancher in your home lab and show you some of the features that have made it indispensable for me.
|
||||
|
||||
## What is Rancher and Why Should You Care?
|
||||
|
||||
Rancher is an open-source platform for managing Kubernetes clusters. Think of it as mission control for all your container workloads. It gives you:
|
||||
|
||||
- A unified interface for managing multiple clusters (perfect if you're running different K8s distros)
|
||||
- Simplified deployment of applications via apps & marketplace
|
||||
- Built-in monitoring, logging, and alerting
|
||||
- User management and role-based access control
|
||||
- A clean, intuitive UI that's actually useful (rare in the Kubernetes world!)
|
||||
|
||||
If you're running even a single Kubernetes cluster at home, Rancher can save you countless hours of typing `kubectl` commands and editing YAML files by hand.
|
||||
|
||||
## Setting Up Rancher in Your Home Lab
|
||||
|
||||
There are several ways to deploy Rancher, but I'll focus on two approaches that work well for home labs.
|
||||
|
||||
### Option 1: Docker Deployment (Quickstart)
|
||||
|
||||
The fastest way to get up and running is with Docker:
|
||||
|
||||
```bash
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
--privileged \
|
||||
rancher/rancher:latest
|
||||
```
|
||||
|
||||
That's it! Navigate to `https://your-server-ip` and you'll be prompted to set a password and server URL.
|
||||
|
||||
But while this method is quick, I prefer the next approach for a more production-like setup.
|
||||
|
||||
### Option 2: Installing Rancher on K3s
|
||||
|
||||
My preferred method is to run Rancher on a lightweight Kubernetes distribution like K3s. This gives you better reliability and easier upgrades.
|
||||
|
||||
First, install K3s:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
Next, install cert-manager (required for Rancher to manage certificates):
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.12.2/cert-manager.yaml
|
||||
```
|
||||
|
||||
Then, install Rancher using Helm:
|
||||
|
||||
```bash
|
||||
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
|
||||
helm repo update
|
||||
|
||||
kubectl create namespace cattle-system
|
||||
|
||||
helm install rancher rancher-stable/rancher \
|
||||
--namespace cattle-system \
|
||||
--set hostname=rancher.yourdomain.com \
|
||||
--set bootstrapPassword=admin
|
||||
```
|
||||
|
||||
Depending on your home lab setup, you might want to use a load balancer or ingress controller. I use Traefik, which comes pre-installed with K3s:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: rancher
|
||||
namespace: cattle-system
|
||||
spec:
|
||||
rules:
|
||||
- host: rancher.yourdomain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: rancher
|
||||
port:
|
||||
number: 80
|
||||
tls:
|
||||
- hosts:
|
||||
- rancher.yourdomain.com
|
||||
secretName: rancher-tls
|
||||
```
|
||||
|
||||
## Importing Your Existing Clusters
|
||||
|
||||
Once Rancher is running, you can import your existing Kubernetes clusters. This is my favorite part because it doesn't require you to rebuild anything.
|
||||
|
||||
1. In the Rancher UI, go to "Cluster Management"
|
||||
2. Click "Import Existing"
|
||||
3. Choose a name for your cluster
|
||||
4. Copy the provided kubectl command and run it on your existing cluster
|
||||
|
||||
Rancher will install its agent on your cluster and begin managing it. Magic!
|
||||
|
||||
## Setting Up Monitoring
|
||||
|
||||
Rancher makes it dead simple to deploy Prometheus and Grafana for monitoring:
|
||||
|
||||
1. From your cluster's dashboard, go to "Apps"
|
||||
2. Select "Monitoring" from the Charts
|
||||
3. Install with default settings (or customize as needed)
|
||||
|
||||
In minutes, you'll have a full monitoring stack with pre-configured dashboards for nodes, pods, workloads, and more.
|
||||
|
||||
Here's what my Grafana dashboard looks like for my home K8s cluster:
|
||||
|
||||

|
||||
|
||||
## Creating Deployments Through the UI
|
||||
|
||||
While I'm a big fan of GitOps and declarative deployments, sometimes you just want to quickly spin up a container without writing YAML. Rancher makes this painless:
|
||||
|
||||
1. Go to your cluster
|
||||
2. Select "Workload > Deployments"
|
||||
3. Click "Create"
|
||||
4. Fill in the form with your container details
|
||||
|
||||
You get a nice UI for setting environment variables, volumes, health checks, and more. Once you're happy with it, Rancher generates and applies the YAML behind the scenes.
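
For reference, the YAML Rancher applies for a simple form-created deployment looks something like this (the names, namespace, and image are placeholders — the real output also carries extra Rancher-managed labels and annotations):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - containerPort: 80
          env:
            - name: EXAMPLE_VAR
              value: "example"
```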
|
||||
|
||||
## Rancher Fleet for GitOps
|
||||
|
||||
One of the newer features I love is Fleet, Rancher's GitOps engine. It allows you to manage deployments across clusters using Git repositories:
|
||||
|
||||
```yaml
|
||||
# Example fleet.yaml
|
||||
defaultNamespace: monitoring
|
||||
helm:
|
||||
releaseName: prometheus
|
||||
repo: https://prometheus-community.github.io/helm-charts
|
||||
chart: kube-prometheus-stack
|
||||
version: 39.4.0
|
||||
values:
|
||||
grafana:
|
||||
adminPassword: ${GRAFANA_PASSWORD}
|
||||
targets:
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
environment: production
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
environment: development
|
||||
helm:
|
||||
values:
|
||||
resources:
|
||||
limits:
|
||||
memory: 1Gi
|
||||
requests:
|
||||
memory: 512Mi
|
||||
```
|
||||
|
||||
With Fleet, I maintain a Git repository with all my deployments, and Rancher automatically applies them to the appropriate clusters. When I push changes, they're automatically deployed - proper GitOps!
|
||||
|
||||
## Rancher for Projects and Teams
|
||||
|
||||
If you're working with a team or want to compartmentalize your applications, Rancher's projects feature is fantastic:
|
||||
|
||||
1. Create different projects within a cluster (e.g., "Media," "Home Automation," "Development")
|
||||
2. Assign namespaces to projects
|
||||
3. Set resource quotas for each project
|
||||
4. Create users and assign them to projects with specific permissions
|
||||
|
||||
This way, you can give friends or family members access to specific applications without worrying about them breaking your critical services.
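
Under the hood, project resource quotas map onto standard Kubernetes `ResourceQuota` objects in the member namespaces. A plain-Kubernetes sketch of what that amounts to (the namespace and limits are arbitrary examples):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: media-quota
  namespace: media
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```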
|
||||
|
||||
## Advanced: Custom Cluster Templates
|
||||
|
||||
As my home lab grew, I started using Rancher's cluster templates to ensure consistency across my Kubernetes installations:
|
||||
|
||||
```yaml
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: ClusterTemplate
|
||||
metadata:
|
||||
name: homelab-standard
|
||||
spec:
|
||||
displayName: HomeStack Standard
|
||||
revisionName: homelab-standard-v1
|
||||
members:
|
||||
- accessType: owner
|
||||
userPrincipalName: user-abc123
|
||||
template:
|
||||
spec:
|
||||
rancherKubernetesEngineConfig:
|
||||
services:
|
||||
etcd:
|
||||
backupConfig:
|
||||
enabled: true
|
||||
intervalHours: 12
|
||||
retention: 6
|
||||
kubeApi:
|
||||
auditLog:
|
||||
enabled: true
|
||||
network:
|
||||
plugin: canal
|
||||
monitoring:
|
||||
provider: metrics-server
|
||||
addons: |-
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: cert-manager
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: ingress-nginx
|
||||
```
|
||||
|
||||
## My Top Rancher Tips
|
||||
|
||||
After years of using Rancher, here are my top tips:
|
||||
|
||||
1. **Use the Rancher CLI**: For repetitive tasks, the CLI is faster than the UI:
|
||||
```bash
|
||||
rancher login https://rancher.yourdomain.com --token token-abc123
|
||||
rancher kubectl get nodes
|
||||
```
|
||||
|
||||
2. **Set Up External Authentication**: Connect Rancher to your identity provider (I use GitHub):
|
||||
```yaml
|
||||
# Sample GitHub auth config
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: AuthConfig
|
||||
metadata:
|
||||
name: github
|
||||
type: githubConfig
|
||||
properties:
|
||||
enabled: true
|
||||
clientId: your-github-client-id
|
||||
clientSecret: your-github-client-secret
|
||||
allowedPrincipals:
|
||||
- github_user://your-github-username
|
||||
- github_org://your-github-org
|
||||
```
|
||||
|
||||
3. **Create Node Templates**: If you're using RKE, save node templates for quick cluster expansion.
|
||||
|
||||
4. **Use App Templates**: Save your common applications as templates for quick deployment.
|
||||
|
||||
5. **Set Up Alerts**: Configure alerts for node health, pod failures, and resource constraints.
|
||||
|
||||
## Dealing with Common Rancher Issues
|
||||
|
||||
Even the best tools have their quirks. Here are some issues I've encountered and how I solved them:
|
||||
|
||||
### Issue: Rancher UI Becomes Slow
|
||||
|
||||
If your Rancher UI starts lagging, check your browser's local storage. The Rancher UI caches a lot of data, which can build up over time:
|
||||
|
||||
```javascript
|
||||
// Run this in your browser console while on the Rancher page
|
||||
localStorage.clear()
|
||||
```
|
||||
|
||||
### Issue: Certificate Errors After DNS Changes
|
||||
|
||||
If you change your domain or DNS settings, Rancher certificates might need to be regenerated:
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-system delete secret tls-rancher-ingress
|
||||
kubectl -n cattle-system delete secret tls-ca
|
||||
```
|
||||
|
||||
Then restart the Rancher pods:
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-system rollout restart deploy/rancher
|
||||
```
|
||||
|
||||
### Issue: Stuck Cluster Imports
|
||||
|
||||
If a cluster gets stuck during import, clean up the agent resources and try again:
|
||||
|
||||
```bash
|
||||
kubectl delete clusterrole cattle-admin cluster-owner
|
||||
kubectl delete clusterrolebinding cattle-admin-binding cluster-owner
|
||||
kubectl delete namespace cattle-system
|
||||
```
|
||||
|
||||
## The Future of Rancher
|
||||
|
||||
With SUSE's acquisition of Rancher Labs, the future looks bright. The latest Rancher updates have added:
|
||||
|
||||
- Better integration with cloud providers
|
||||
- Improved security features
|
||||
- Enhanced multi-cluster management
|
||||
- Lower resource requirements (great for home labs)
|
||||
|
||||
My wish list for future versions includes:
|
||||
|
||||
- Native GitOps for everything (not just workloads)
|
||||
- Better templating for one-click deployments
|
||||
- More pre-configured monitoring dashboards
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Rancher has transformed how I manage my home Kubernetes clusters. What used to be a complex, time-consuming task is now almost... fun? If you're running Kubernetes at home and haven't tried Rancher yet, you're missing out on one of the best tools in the Kubernetes ecosystem.
|
||||
|
||||
Sure, you could manage everything with kubectl and YAML files (and I still do that sometimes), but having a well-designed UI for management, monitoring, and troubleshooting saves countless hours and reduces the learning curve for those just getting started with Kubernetes.
|
||||
|
||||
Are you using Rancher or another tool to manage your Kubernetes clusters? What's been your experience? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on March 5, 2024 with information about Rancher v2.7 features and Fleet GitOps capabilities._
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
title: Starting My Digital Garden
|
||||
pubDate: 2025-04-19
|
||||
tags:
|
||||
- blog
|
||||
- meta
|
||||
- digital-garden
|
||||
draft: false
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
# Starting My Digital Garden
|
||||
|
||||
## Introduction
|
||||
|
||||
Today I'm launching my public digital garden - a space where I'll share my thoughts, ideas, and projects with the world.
|
||||
|
||||
## What to Expect
|
||||
|
||||
This site contains two main sections:
|
||||
|
||||
1. **Journal** - Daily notes and thoughts, more raw and in-the-moment
|
||||
2. **Blog** - More structured and polished articles on various topics
|
||||
|
||||
I'll be using my daily journal to capture ideas as they happen, and some of these will evolve into more detailed blog posts over time.
|
||||
|
||||
## Why a Digital Garden?
|
||||
|
||||
Unlike traditional blogs that are often published, then forgotten, a digital garden is meant to grow and evolve over time. I'll be revisiting and updating content as my thoughts and understanding develop.
|
||||
|
||||
## Topics I'll Cover
|
||||
|
||||
- Technology projects I'm working on
|
||||
- Learning notes and discoveries
|
||||
- Workflow and productivity systems
|
||||
- Occasional personal reflections
|
||||
|
||||
## Stay Connected
|
||||
|
||||
Feel free to check back regularly to see what's growing in this garden. The journal section will be updated most frequently, while blog posts will appear when ideas have had time to mature.
|
|
@ -0,0 +1,14 @@
|
|||
---
|
||||
title: Test Post
|
||||
pubDate: 2024-03-20
|
||||
description: This is a test post to verify the blog setup
|
||||
category: Test
|
||||
tags:
|
||||
- test
|
||||
draft: true
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
# Test Post
|
||||
|
||||
This is a test post to verify that the blog setup is working correctly.
|
|
@ -0,0 +1,517 @@
|
|||
---
|
||||
title: Setting Up VS Code Server for Remote Development Anywhere
|
||||
description: How to set up and configure VS Code Server for seamless remote development from any device
|
||||
pubDate: 2023-04-18
|
||||
updatedDate: 2024-04-19
|
||||
category: Services
|
||||
tags:
|
||||
- vscode
|
||||
- remote-development
|
||||
- self-hosted
|
||||
- coding
|
||||
- homelab
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
If you're like me, you probably find yourself coding on multiple devices - maybe a desktop at home, a laptop when traveling, or even occasionally on a tablet. For years, keeping development environments in sync was a pain point. Enter [VS Code Server](https://code.visualstudio.com/docs/remote/vscode-server), the solution that has completely transformed my development workflow.
|
||||
|
||||
Today, I want to show you how to set up your own self-hosted VS Code Server that lets you code from literally any device with a web browser, all while using your powerful home server for the heavy lifting.
|
||||
|
||||
## Why VS Code Server?
|
||||
|
||||
Before we dive into the setup, let's talk about why you might want this:
|
||||
|
||||
- **Consistent environment**: The same development setup, extensions, and configurations regardless of which device you're using.
|
||||
- **Resource optimization**: Run resource-intensive tasks (builds, tests) on your powerful server instead of your laptop.
|
||||
- **Work from anywhere**: Access your development environment from any device with a browser, even an iPad or a borrowed computer.
|
||||
- **Seamless switching**: Start working on one device and continue on another without missing a beat.
|
||||
|
||||
I've been using this setup for months now, and it's been a game-changer for my productivity. Let's get into the setup!
|
||||
|
||||
## Setting Up VS Code Server
|
||||
|
||||
There are a few ways to run VS Code Server. I'll cover the official method and my preferred Docker approach.
|
||||
|
||||
### Option 1: Official CLI Installation
|
||||
|
||||
The VS Code team provides a CLI for setting up the server:
|
||||
|
||||
```bash
|
||||
# Download and install the CLI
|
||||
curl -fsSL "https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64" -o vscode_cli.tar.gz
|
||||
tar -xf vscode_cli.tar.gz
|
||||
sudo mv code /usr/local/bin/
|
||||
|
||||
# Start the server
|
||||
code serve-web --accept-server-license-terms --host 0.0.0.0
|
||||
```
|
||||
|
||||
This method is straightforward but requires you to manage the process yourself.
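
If you go this route, you'll probably want systemd to keep the server running across reboots. Here's a minimal sketch of a unit file; the binary path, user, and service name are assumptions, so adjust them for your system:

```bash
# Create a systemd unit for the serve-web process
sudo tee /etc/systemd/system/code-serve-web.service > /dev/null <<'EOF'
[Unit]
Description=VS Code Server (serve-web)
After=network.target

[Service]
User=youruser
ExecStart=/usr/local/bin/code serve-web --accept-server-license-terms --host 0.0.0.0
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now code-serve-web
```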
|
||||
|
||||
### Option 2: Docker Installation (My Preference)
|
||||
|
||||
I prefer using Docker for easier updates and management. Here's my `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
services:
|
||||
code-server:
|
||||
image: linuxserver/code-server:latest
|
||||
container_name: code-server
|
||||
environment:
|
||||
- PUID=1000
|
||||
- PGID=1000
|
||||
- TZ=America/Los_Angeles
|
||||
- PASSWORD=your_secure_password # Consider using Docker secrets instead
|
||||
- SUDO_PASSWORD=your_sudo_password # Optional
|
||||
- PROXY_DOMAIN=code.yourdomain.com # Optional
|
||||
volumes:
|
||||
- ./config:/config
|
||||
- /path/to/your/projects:/projects
|
||||
- /path/to/your/home:/home/coder
|
||||
ports:
|
||||
- 8443:8443
|
||||
restart: unless-stopped
|
||||
```
|
||||
|
||||
Run it with:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Your VS Code Server will be available at `https://your-server-ip:8443`.
|
||||
|
||||
### Option 3: Kubernetes Deployment
|
||||
|
||||
For those running Kubernetes (like me), here's a YAML manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vscode-server
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vscode-server
|
||||
spec:
|
||||
containers:
|
||||
- name: vscode-server
|
||||
image: linuxserver/code-server:latest
|
||||
env:
|
||||
- name: PUID
|
||||
value: "1000"
|
||||
- name: PGID
|
||||
value: "1000"
|
||||
- name: TZ
|
||||
value: "America/Los_Angeles"
|
||||
- name: PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: vscode-secrets
|
||||
key: password
|
||||
ports:
|
||||
- containerPort: 8443
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /config
|
||||
- name: projects
|
||||
mountPath: /projects
|
||||
volumes:
|
||||
- name: config
|
||||
persistentVolumeClaim:
|
||||
claimName: vscode-config
|
||||
- name: projects
|
||||
persistentVolumeClaim:
|
||||
claimName: vscode-projects
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
spec:
|
||||
selector:
|
||||
app: vscode-server
|
||||
ports:
|
||||
- port: 8443
|
||||
targetPort: 8443
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: nginx
|
||||
cert-manager.io/cluster-issuer: letsencrypt-prod
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- code.yourdomain.com
|
||||
secretName: vscode-tls
|
||||
rules:
|
||||
- host: code.yourdomain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: vscode-server
|
||||
port:
|
||||
number: 8443
|
||||
```
|
||||
|
||||
## Accessing VS Code Server Securely
|
||||
|
||||
You don't want to expose your development environment directly to the internet without proper security. Here are my recommendations:
|
||||
|
||||
### 1. Use a Reverse Proxy with SSL
|
||||
|
||||
I use Traefik as a reverse proxy with automatic SSL certificate generation:
|
||||
|
||||
```yaml
|
||||
# traefik.yml dynamic config
|
||||
http:
|
||||
routers:
|
||||
vscode:
|
||||
rule: "Host(`code.yourdomain.com`)"
|
||||
service: "vscode"
|
||||
entryPoints:
|
||||
- websecure
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
services:
|
||||
vscode:
|
||||
loadBalancer:
|
||||
servers:
|
||||
- url: "http://localhost:8443"
|
||||
```
|
||||
|
||||
### 2. Set Up Authentication
|
||||
|
||||
The LinuxServer image already includes basic authentication, but you can add another layer with something like Authelia:
|
||||
|
||||
```yaml
|
||||
# authelia configuration.yml
|
||||
access_control:
|
||||
default_policy: deny
|
||||
rules:
|
||||
- domain: code.yourdomain.com
|
||||
policy: two_factor
|
||||
subject: "group:developers"
|
||||
```
|
||||
|
||||
### 3. Use Cloudflare Tunnel
|
||||
|
||||
For ultimate security, I use a Cloudflare Tunnel to avoid exposing any ports:
|
||||
|
||||
```yaml
|
||||
# cloudflared config.yml
|
||||
tunnel: your-tunnel-id
|
||||
credentials-file: /etc/cloudflared/creds.json
|
||||
|
||||
ingress:
|
||||
- hostname: code.yourdomain.com
|
||||
service: http://localhost:8443
|
||||
originRequest:
|
||||
noTLSVerify: true
|
||||
- service: http_status:404
|
||||
```
|
||||
|
||||
## Configuring Your VS Code Server Environment
|
||||
|
||||
Once your server is running, it's time to set it up for optimal productivity:
|
||||
|
||||
### 1. Install Essential Extensions
|
||||
|
||||
Here are the must-have extensions I install first:
|
||||
|
||||
```bash
|
||||
# From the VS Code terminal
|
||||
code-server --install-extension ms-python.python
|
||||
code-server --install-extension ms-azuretools.vscode-docker
|
||||
code-server --install-extension dbaeumer.vscode-eslint
|
||||
code-server --install-extension esbenp.prettier-vscode
|
||||
code-server --install-extension github.copilot
|
||||
code-server --install-extension golang.go
|
||||
```
|
||||
|
||||
Or you can install them through the Extensions marketplace in the UI.
|
||||
|
||||
### 2. Configure Settings Sync
|
||||
|
||||
To keep your settings in sync between instances:
|
||||
|
||||
1. Open the Command Palette (Ctrl+Shift+P)
|
||||
2. Search for "Settings Sync: Turn On"
|
||||
3. Sign in with your GitHub or Microsoft account
|
||||
|
||||
### 3. Set Up Git Authentication
|
||||
|
||||
For seamless Git operations:
|
||||
|
||||
```bash
|
||||
# Generate a new SSH key if needed
|
||||
ssh-keygen -t ed25519 -C "your_email@example.com"
|
||||
|
||||
# Add to your GitHub/GitLab account
|
||||
cat ~/.ssh/id_ed25519.pub
|
||||
|
||||
# Configure Git
|
||||
git config --global user.name "Your Name"
|
||||
git config --global user.email "your_email@example.com"
|
||||
```
|
||||
|
||||
## Power User Features
|
||||
|
||||
Now let's look at some advanced configurations that make VS Code Server even more powerful:
|
||||
|
||||
### 1. Workspace Launcher
|
||||
|
||||
I created a simple HTML page that lists all my projects for quick access:
|
||||
|
||||
```html
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>LaForceIT Workspace Launcher</title>
|
||||
<style>
|
||||
body {
|
||||
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
||||
background-color: #1e1e1e;
|
||||
color: #d4d4d4;
|
||||
max-width: 800px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
.project-list {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
|
||||
gap: 15px;
|
||||
margin-top: 20px;
|
||||
}
|
||||
.project-card {
|
||||
background-color: #252526;
|
||||
border-radius: 5px;
|
||||
padding: 15px;
|
||||
transition: transform 0.2s;
|
||||
}
|
||||
.project-card:hover {
|
||||
transform: translateY(-5px);
|
||||
background-color: #2d2d2d;
|
||||
}
|
||||
a {
|
||||
color: #569cd6;
|
||||
text-decoration: none;
|
||||
}
|
||||
h1 { color: #569cd6; }
|
||||
h2 { color: #4ec9b0; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<h1>LaForceIT Workspace Launcher</h1>
|
||||
<div class="project-list">
|
||||
<div class="project-card">
|
||||
<h2>ArgoBox</h2>
|
||||
<p>My Kubernetes home lab platform</p>
|
||||
<a href="/projects/argobox">Open Workspace</a>
|
||||
</div>
|
||||
<div class="project-card">
|
||||
<h2>Blog</h2>
|
||||
<p>LaForceIT blog and digital garden</p>
|
||||
<a href="/projects/blog">Open Workspace</a>
|
||||
</div>
|
||||
<!-- Add more projects as needed -->
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
### 2. Custom Terminal Profile
|
||||
|
||||
Add this to your `settings.json` for a better terminal experience:
|
||||
|
||||
```json
|
||||
"terminal.integrated.profiles.linux": {
|
||||
"bash": {
|
||||
"path": "bash",
|
||||
"icon": "terminal-bash",
|
||||
"args": ["-l"]
|
||||
},
|
||||
"zsh": {
|
||||
"path": "zsh"
|
||||
},
|
||||
"customProfile": {
|
||||
"path": "bash",
|
||||
"args": ["-c", "neofetch && bash -l"],
|
||||
"icon": "terminal-bash",
|
||||
"overrideName": true
|
||||
}
|
||||
},
|
||||
"terminal.integrated.defaultProfile.linux": "customProfile"
|
||||
```
|
||||
|
||||
### 3. Persistent Development Containers
|
||||
|
||||
I use Docker-in-Docker to enable VS Code Dev Containers:
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
services:
|
||||
code-server:
|
||||
# ... other config from above
|
||||
volumes:
|
||||
# ... other volumes
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
environment:
|
||||
# ... other env vars
|
||||
- DOCKER_HOST=unix:///var/run/docker.sock
|
||||
```
|
||||
|
||||
## Real-World Examples: How I Use VS Code Server
|
||||
|
||||
Let me share a few real-world examples of how I use this setup:
|
||||
|
||||
### Example 1: Coding on iPad During Travel
|
||||
|
||||
When traveling with just my iPad, I connect to my VS Code Server to work on my projects. With a Bluetooth keyboard and the amazing iPad screen, it's a surprisingly good experience. The heavy compilation happens on my server back home, so battery life on the iPad remains excellent.
|
||||
|
||||
### Example 2: Pair Programming Sessions
|
||||
|
||||
When helping friends debug issues, I can generate a temporary access link to my VS Code Server:
|
||||
|
||||
```bash
|
||||
# Create a time-limited token
|
||||
TEMP_TOKEN=$(openssl rand -hex 16)
|
||||
echo "Token: $TEMP_TOKEN" > /tmp/temp_token
|
||||
curl -X POST -H "Content-Type: application/json" \
|
||||
-d "{\"token\":\"$TEMP_TOKEN\", \"expiresIn\":\"2h\"}" \
|
||||
http://localhost:8443/api/auth/temporary
|
||||
```
|
||||
|
||||
### Example 3: Switching Between Devices
|
||||
|
||||
I often start coding on my desktop, then switch to my laptop when moving to another room. With VS Code Server, I just close the browser on one device and open it on another - my entire session, including unsaved changes and terminal state, remains intact.
|
||||
|
||||
## Monitoring and Maintaining Your VS Code Server
|
||||
|
||||
To keep your server running smoothly:
|
||||
|
||||
### 1. Set Up Health Checks
|
||||
|
||||
I use Uptime Kuma to monitor my VS Code Server:
|
||||
|
||||
```yaml
|
||||
# Docker Compose snippet for Uptime Kuma
|
||||
uptime-kuma:
|
||||
image: louislam/uptime-kuma:latest
|
||||
container_name: uptime-kuma
|
||||
volumes:
|
||||
- ./uptime-kuma:/app/data
|
||||
ports:
|
||||
- 3001:3001
|
||||
restart: unless-stopped
|
||||
```
|
||||
|
||||
Add a monitor that checks the `https://code.yourdomain.com/healthz` endpoint.
|
||||
|
||||
### 2. Regular Backups
|
||||
|
||||
Set up a cron job to back up your VS Code Server configuration:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# /etc/cron.daily/backup-vscode-server
|
||||
tar -czf /backups/vscode-server-$(date +%Y%m%d).tar.gz /path/to/config
|
||||
```
|
||||
|
||||
### 3. Update Script
|
||||
|
||||
Create a script for easy updates:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# update-vscode-server.sh
|
||||
cd /path/to/docker-compose
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
docker system prune -f
|
||||
```
|
||||
|
||||
## Troubleshooting Common Issues
|
||||
|
||||
Here are solutions to some issues I've encountered:
|
||||
|
||||
### Issue: Extensions Not Installing
|
||||
|
||||
If extensions fail to install, try:
|
||||
|
||||
```bash
|
||||
# Clear the extensions cache
|
||||
rm -rf ~/.vscode-server/extensions/*
|
||||
|
||||
# Restart the server
|
||||
docker-compose restart code-server
|
||||
```
|
||||
|
||||
### Issue: Performance Problems
|
||||
|
||||
If you're experiencing lag:
|
||||
|
||||
```yaml
|
||||
# Adjust memory limits in docker-compose.yml
|
||||
services:
|
||||
code-server:
|
||||
# ... other config
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
memory: 4G
|
||||
reservations:
|
||||
memory: 1G
|
||||
```
|
||||
|
||||
### Issue: Git Integration Not Working
|
||||
|
||||
For Git authentication issues:
|
||||
|
||||
```bash
|
||||
# Make sure your SSH key is properly set up
|
||||
eval "$(ssh-agent -s)"
|
||||
ssh-add ~/.ssh/id_ed25519
|
||||
|
||||
# Test your connection
|
||||
ssh -T git@github.com
|
||||
```
|
||||
|
||||
## Why I Prefer Self-Hosted Over GitHub Codespaces
|
||||
|
||||
People often ask why I don't just use GitHub Codespaces. Here's my take:
|
||||
|
||||
1. **Cost**: Self-hosted is essentially free (if you already have a server).
|
||||
2. **Privacy**: All my code remains on my hardware.
|
||||
3. **Customization**: Complete control over the environment.
|
||||
4. **Performance**: My server has 64GB RAM and a 12-core CPU - better than most cloud options.
|
||||
5. **Availability**: Works even when GitHub is down or when I have limited internet.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
VS Code Server has truly changed how I approach development. The ability to have a consistent, powerful environment available from any device has increased my productivity and eliminated the friction of context-switching between machines.
|
||||
|
||||
Whether you're a solo developer or part of a team, having a centralized development environment that's accessible from anywhere is incredibly powerful. And the best part? It uses the familiar VS Code interface that millions of developers already know and love.
|
||||
|
||||
Have you tried VS Code Server or similar remote development solutions? I'd love to hear about your setup and experiences in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on March 10, 2024 with information about the latest VS Code Server features and Docker image updates._
|
|
@ -0,0 +1,28 @@
|
|||
#!/bin/bash
|
||||
# Script to handle symbolic links before commit
|
||||
echo "Processing symbolic links for content..."
|
||||
|
||||
declare -A CONTENT_PATHS
|
||||
# src/content directories
|
||||
CONTENT_PATHS["posts"]="src/content/posts"
|
||||
CONTENT_PATHS["projects"]="src/content/projects"
|
||||
CONTENT_PATHS["configurations"]="src/content/configurations"
|
||||
CONTENT_PATHS["external-posts"]="src/content/external-posts"
|
||||
# public/blog directories
|
||||
CONTENT_PATHS["configs"]="public/blog/configs"
|
||||
CONTENT_PATHS["images"]="public/blog/images"
|
||||
CONTENT_PATHS["infrastructure"]="public/blog/infrastructure"
|
||||
CONTENT_PATHS["blog-posts"]="public/blog/posts"
|
||||
|
||||
for dir_name in "${!CONTENT_PATHS[@]}"; do
|
||||
dir_path="${CONTENT_PATHS[$dir_name]}"
|
||||
if [ -L "$dir_path" ]; then
|
||||
echo "Processing $dir_path..."
|
||||
target=$(readlink "$dir_path")
|
||||
rm "$dir_path"
|
||||
mkdir -p "$(dirname "$dir_path")"
|
||||
cp -r "$target" "$dir_path"
|
||||
git add "$dir_path"
|
||||
echo "Processed $dir_path -> $target"
|
||||
fi
|
||||
done
|
|
@ -0,0 +1,12 @@
|
|||
---
|
||||
title: "External Posts Collection"
|
||||
description: "A placeholder document for the external-posts collection"
|
||||
pubDate: 2025-04-18
|
||||
category: "External"
|
||||
tags: ["placeholder"]
|
||||
draft: false
|
||||
---
|
||||
|
||||
# External Posts Collection
|
||||
|
||||
This is a placeholder file for the external-posts collection.
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
title: "Blog Posts Collection"
|
||||
description: "Documentation for blog posts"
|
||||
pubDate: 2025-04-18
|
||||
draft: true
|
||||
---
|
||||
|
||||
# Blog Posts Collection
|
||||
|
||||
This directory contains blog posts for the LaForceIT digital garden.
|
||||
|
||||
## Content Guidelines
|
||||
|
||||
- All posts should include proper frontmatter
|
||||
- Use Markdown for formatting content
|
||||
- Images should be placed in the public/blog/images directory
|
||||
|
||||
## Frontmatter Requirements
|
||||
|
||||
Every post needs at minimum:
|
||||
|
||||
```
|
||||
---
|
||||
title: "Post Title"
|
||||
pubDate: YYYY-MM-DD
|
||||
---
|
||||
```
|
|
@ -0,0 +1,14 @@
|
|||
---
|
||||
title: Secure Remote Access with Cloudflare Tunnels
|
||||
description: How to set up Cloudflare Tunnels for secure remote access to your home lab services
|
||||
pubDate: Jul 22 2023
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: networking
|
||||
tags:
|
||||
- cloudflare
|
||||
- networking
|
||||
- security
|
||||
- homelab
|
||||
- tunnels
|
||||
readTime: "7 min read"
|
||||
---
|
|
@ -0,0 +1,180 @@
|
|||
---
|
||||
title: Secure Remote Access with Cloudflare Tunnels
|
||||
description: How to set up Cloudflare Tunnels for secure remote access to your home lab services
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: networking
|
||||
tags:
|
||||
- cloudflare
|
||||
- networking
|
||||
- security
|
||||
- homelab
|
||||
- tunnels
|
||||
readTime: 7 min read
|
||||
---
|
||||
|
||||
# Secure Remote Access with Cloudflare Tunnels
|
||||
|
||||
Cloudflare Tunnels provide a secure way to expose your locally hosted applications and services to the internet without opening ports on your firewall or requiring a static IP address. This guide will show you how to set up Cloudflare Tunnels to securely access your home lab services from anywhere.
|
||||
|
||||
## Why Use Cloudflare Tunnels?
|
||||
|
||||
- **Security**: No need to open ports on your firewall
|
||||
- **Simplicity**: Works behind CGNAT, dynamic IPs, and complex network setups
|
||||
- **Performance**: Traffic routed through Cloudflare's global network
|
||||
- **Zero Trust**: Integrate with Cloudflare Access for authentication
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A Cloudflare account
|
||||
- A domain managed by Cloudflare
|
||||
- Docker installed (for containerized deployment)
|
||||
- Services you want to expose (web apps, SSH, and so on)
|
||||
|
||||
## Setting Up Cloudflare Tunnels
|
||||
|
||||
### 1. Install cloudflared
|
||||
|
||||
You can install cloudflared using Docker:
|
||||
|
||||
```bash
|
||||
docker pull cloudflare/cloudflared:latest
|
||||
```
|
||||
|
||||
Or directly on your system:
|
||||
|
||||
```bash
|
||||
# For Debian/Ubuntu
|
||||
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
|
||||
sudo dpkg -i cloudflared.deb
|
||||
|
||||
# For other systems, visit: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation
|
||||
```
|
||||
|
||||
### 2. Authenticate cloudflared
|
||||
|
||||
Run the following command to authenticate:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel login
|
||||
```
|
||||
|
||||
This will open a browser window where you'll need to log in to your Cloudflare account and select the domain you want to use with the tunnel.
|
||||
|
||||
### 3. Create a Tunnel
|
||||
|
||||
Create a new tunnel with a meaningful name:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel create homelab
|
||||
```
|
||||
|
||||
This will generate a tunnel ID and credentials file at `~/.cloudflared/`.
|
||||
|
||||
### 4. Configure your Tunnel
|
||||
|
||||
Create a config file at `~/.cloudflared/config.yml`:
|
||||
|
||||
```yaml
|
||||
tunnel: <TUNNEL_ID>
|
||||
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json
|
||||
|
||||
ingress:
|
||||
# Dashboard application
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
|
||||
# Grafana service
|
||||
- hostname: grafana.yourdomain.com
|
||||
service: http://localhost:3000
|
||||
|
||||
# SSH service
|
||||
- hostname: ssh.yourdomain.com
|
||||
service: ssh://localhost:22
|
||||
|
||||
# Catch-all rule, which responds with 404
|
||||
- service: http_status:404
|
||||
```
|
||||
|
||||
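A quick note on the SSH entry: HTTP hostnames work straight from a browser, but SSH clients can't talk to the tunnel directly. The usual pattern is to install cloudflared on the client machine and use it as a ProxyCommand; here's a minimal sketch, assuming the hostname from the ingress rule above:

```bash
# Client-side ~/.ssh/config entry (requires cloudflared installed locally)
# Host ssh.yourdomain.com
#   ProxyCommand cloudflared access ssh --hostname %h

# Or test it for a single connection:
ssh -o ProxyCommand="cloudflared access ssh --hostname ssh.yourdomain.com" user@ssh.yourdomain.com
```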
### 5. Route Traffic to Your Tunnel
|
||||
|
||||
Configure DNS records to route traffic to your tunnel:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel route dns homelab dashboard.yourdomain.com
|
||||
cloudflared tunnel route dns homelab grafana.yourdomain.com
|
||||
cloudflared tunnel route dns homelab ssh.yourdomain.com
|
||||
```
|
||||
|
||||
### 6. Start the Tunnel
|
||||
|
||||
Run the tunnel:
|
||||
|
||||
```bash
|
||||
cloudflared tunnel run homelab
|
||||
```
|
||||
|
||||
For production deployments, you'll want to set up cloudflared as a service:
|
||||
|
||||
```bash
|
||||
# For systemd-based systems
|
||||
sudo cloudflared service install
|
||||
sudo systemctl start cloudflared
|
||||
```
|
||||
|
||||
## Docker Compose Example
|
||||
|
||||
For a containerized deployment, create a `docker-compose.yml` file:
|
||||
|
||||
```yaml
|
||||
version: '3.8'
|
||||
services:
|
||||
cloudflared:
|
||||
image: cloudflare/cloudflared:latest
|
||||
container_name: cloudflared
|
||||
restart: unless-stopped
|
||||
command: tunnel run
|
||||
environment:
|
||||
- TUNNEL_TOKEN=your_tunnel_token
|
||||
volumes:
|
||||
- ~/.cloudflared:/etc/cloudflared
|
||||
```
|
||||
|
||||
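The `TUNNEL_TOKEN` value is the one assumption in this file. If you created the tunnel from the Zero Trust dashboard, the token is shown there; recent cloudflared releases can also print it for a CLI-created tunnel, though availability depends on your version:

```bash
# Print the token for the tunnel created earlier (may require a recent cloudflared)
cloudflared tunnel token homelab
```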
## Security Considerations
|
||||
|
||||
- Store your credentials file safely; it provides full access to your tunnel
|
||||
- Consider using Cloudflare Access for additional authentication
|
||||
- Regularly rotate credentials and update cloudflared
|
||||
|
||||
## Advanced Configuration
|
||||
|
||||
### Zero Trust Access
|
||||
|
||||
You can integrate Cloudflare Tunnels with Cloudflare Access to require authentication:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
originRequest:
|
||||
noTLSVerify: true
|
||||
```
|
||||
|
||||
Then, create an Access application in the Cloudflare Zero Trust dashboard to protect this hostname.
|
||||
|
||||
### Health Checks
|
||||
|
||||
Configure health checks to ensure your services are running:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
- hostname: dashboard.yourdomain.com
|
||||
service: http://localhost:8080
|
||||
originRequest:
|
||||
healthCheckEnabled: true
|
||||
healthCheckPath: /health
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
Cloudflare Tunnels provide a secure, reliable way to access your home lab services remotely without exposing your home network to the internet. With the setup described in this guide, you can securely access your services from anywhere in the world.
|
|
@ -0,0 +1,206 @@
|
|||
---
|
||||
title: Setting Up FileBrowser for Self-Hosted File Management
|
||||
description: A step-by-step guide to deploying and configuring FileBrowser for secure, user-friendly file management in your home lab environment.
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- filebrowser
|
||||
- self-hosted
|
||||
- kubernetes
|
||||
- docker
|
||||
- file-management
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've said it before, and I'll say it again - the journey to a well-organized digital life begins with proper file management. If you're like me, you've got files scattered across multiple devices, cloud services, and servers. What if I told you there's a lightweight, sleek solution that puts you back in control without relying on third-party services?
|
||||
|
||||
Enter [FileBrowser](https://filebrowser.org/), a simple yet powerful self-hosted file management interface that I've been using in my home lab for the past few months. Let me show you how to set it up and some cool ways I'm using it.
|
||||
|
||||
## What is FileBrowser?
|
||||
|
||||
FileBrowser is an open-source, single binary file manager with a clean web interface that lets you:
|
||||
|
||||
- Access and manage files from any device with a browser
|
||||
- Share files with customizable permissions
|
||||
- Edit files directly in the browser
|
||||
- Perform basic file operations (copy, move, delete, upload, download)
|
||||
- Search through your files and folders
|
||||
|
||||
The best part? It's lightweight (< 20MB), written in Go, and runs on pretty much anything - from a Raspberry Pi to your Kubernetes cluster.
|
||||
|
||||
## Getting Started with FileBrowser
|
||||
|
||||
### Option 1: Docker Deployment
|
||||
|
||||
For the Docker enthusiasts (like me), here's how to get FileBrowser up and running in seconds:
|
||||
|
||||
```bash
|
||||
docker run -d \
|
||||
--name filebrowser \
|
||||
-v /path/to/your/files:/srv \
|
||||
-v /path/to/filebrowser/database:/database \
|
||||
-e PUID=$(id -u) \
|
||||
-e PGID=$(id -g) \
|
||||
-p 8080:80 \
|
||||
filebrowser/filebrowser:latest
|
||||
```
|
||||
|
||||
This will start FileBrowser on port 8080, with your files mounted at `/srv` inside the container.
|
||||
|
||||
### Option 2: Kubernetes Deployment
|
||||
|
||||
For my fellow Kubernetes fanatics, here's a simple deployment using plain Kubernetes manifests:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: filebrowser
|
||||
labels:
|
||||
app: filebrowser
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: filebrowser
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: filebrowser
|
||||
spec:
|
||||
containers:
|
||||
- name: filebrowser
|
||||
image: filebrowser/filebrowser:latest
|
||||
ports:
|
||||
- containerPort: 80
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /database
|
||||
- name: data
|
||||
mountPath: /srv
|
||||
volumes:
|
||||
- name: config
|
||||
persistentVolumeClaim:
|
||||
claimName: filebrowser-config
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: filebrowser-data
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: filebrowser
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: filebrowser
|
||||
```
|
||||
|
||||
Don't forget to create the necessary PVCs for your configuration and data.
|
||||
|
||||
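If you need a starting point for those claims, here's a minimal sketch. The sizes and the reliance on your cluster's default StorageClass are assumptions; adjust them to your storage setup:

```bash
# Create the two PVCs referenced by the Deployment above
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filebrowser-config
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filebrowser-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
EOF
```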
## Configuring FileBrowser
|
||||
|
||||
Once you have FileBrowser running, you can access it at `http://your-server:8080`. The default credentials are:
|
||||
|
||||
- Username: `admin`
|
||||
- Password: `admin`
|
||||
|
||||
**Pro tip**: Change these immediately! You can do this through the UI or by using the FileBrowser CLI.
|
||||
|
||||
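If you'd rather do it from the CLI, something along these lines should work against the Docker container from Option 1. Treat it as a sketch: the container name and database path come from the earlier examples, and the exact flags can vary between FileBrowser versions.

```bash
# Change the default admin password from the host (flags may differ per version)
docker exec filebrowser filebrowser users update admin \
  --password "use-a-strong-password" \
  --database /database/filebrowser.db
```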
### Custom Configuration
|
||||
|
||||
You can customize FileBrowser by modifying the configuration file. Here's what my config looks like:
|
||||
|
||||
```json
|
||||
{
|
||||
"port": 80,
|
||||
"baseURL": "",
|
||||
"address": "",
|
||||
"log": "stdout",
|
||||
"database": "/database/filebrowser.db",
|
||||
"root": "/srv",
|
||||
"auth": {
|
||||
"method": "json",
|
||||
"header": ""
|
||||
},
|
||||
"branding": {
|
||||
"name": "LaForceIT Files",
|
||||
"disableExternal": false,
|
||||
"files": "",
|
||||
"theme": "dark"
|
||||
},
|
||||
"cors": {
|
||||
"enabled": false,
|
||||
"credentials": false,
|
||||
"allowedHosts": []
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Securing FileBrowser
|
||||
|
||||
Security is crucial, especially when hosting a file manager. Here's how I secure my FileBrowser instance:
|
||||
|
||||
1. **Reverse Proxy**: I put FileBrowser behind a reverse proxy (Traefik) with SSL encryption.
|
||||
|
||||
2. **Authentication**: I've integrated with my Authelia setup for SSO across my services.
|
||||
|
||||
3. **User Isolation**: I create separate users with their own root directories to keep things isolated.
|
||||
|
||||
Here's a sample Traefik configuration for FileBrowser:
|
||||
|
||||
```yaml
|
||||
apiVersion: traefik.containo.us/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: filebrowser
|
||||
namespace: default
|
||||
spec:
|
||||
entryPoints:
|
||||
- websecure
|
||||
routes:
|
||||
- match: Host(`files.yourdomain.com`)
|
||||
kind: Rule
|
||||
services:
|
||||
- name: filebrowser
|
||||
port: 80
|
||||
middlewares:
|
||||
- name: auth-middleware
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
```
|
||||
|
||||
## My Top 5 FileBrowser Use Cases
|
||||
|
||||
1. **Home Media Management**: I organize my photos, music, and video collections.
|
||||
|
||||
2. **Document Repository**: A central place for important documents that I can access from anywhere.
|
||||
|
||||
3. **Code Snippet Library**: I keep commonly used code snippets organized by language and project.
|
||||
|
||||
4. **Backup Verification**: An easy way to browse my automated backups to verify they're working.
|
||||
|
||||
5. **Sharing Files**: When I need to share large files with friends or family, I create a temporary user with limited access.
|
||||
|
||||
## Power User Tips
|
||||
|
||||
Here are some tricks I've learned along the way:
|
||||
|
||||
- **Keyboard Shortcuts**: Press `?` in the UI to see all available shortcuts.
|
||||
- **Custom Branding**: Personalize the look and feel by setting a custom name and logo in the config.
|
||||
- **Multiple Instances**: Run multiple instances for different purposes (e.g., one for media, one for documents).
|
||||
- **Command Runner**: Use the built-in command runner to execute shell scripts on your server.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
FileBrowser has become an essential part of my home lab setup. It's lightweight, fast, and just gets the job done without unnecessary complexity. Whether you're a home lab enthusiast or just looking for a simple way to manage your files, FileBrowser is worth checking out.
|
||||
|
||||
What file management solution are you using? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on December 15, 2023 with the latest FileBrowser configuration options and security recommendations._
|
|
@ -0,0 +1,332 @@
|
|||
---
|
||||
title: "Self-Hosting Git with Gitea: Your Own GitHub Alternative"
|
||||
description: A comprehensive guide to setting up Gitea - a lightweight, self-hosted Git service that gives you full control over your code repositories.
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- gitea
|
||||
- git
|
||||
- self-hosted
|
||||
- devops
|
||||
- kubernetes
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
If you're a developer like me who values ownership and privacy, you've probably wondered if there's a way to get the convenience of GitHub or GitLab without handing over your code to a third party. Enter Gitea - a painless, self-hosted Git service written in Go that I've been using for my personal projects for the past year.
|
||||
|
||||
Let me walk you through setting up your own Gitea instance and show you why it might be the perfect addition to your development workflow.
|
||||
|
||||
## Why Gitea?
|
||||
|
||||
First, let's talk about why you might want to run your own Git server:
|
||||
|
||||
- **Complete control**: Your code, your server, your rules.
|
||||
- **Privacy**: Keep sensitive projects completely private.
|
||||
- **No limits**: Create as many private repositories as you want.
|
||||
- **Lightweight**: Gitea runs smoothly on minimal hardware (even a Raspberry Pi).
|
||||
- **GitHub-like experience**: Familiar interface with issues, pull requests, and more.
|
||||
|
||||
I've tried several self-hosted Git solutions, but Gitea strikes the perfect balance between features and simplicity. It's like the Goldilocks of Git servers - not too heavy, not too light, just right.
|
||||
|
||||
## Getting Started with Gitea
|
||||
|
||||
### Option 1: Docker Installation
|
||||
|
||||
The easiest way to get started with Gitea is using Docker. Here's a simple `docker-compose.yml` file to get you up and running:
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
gitea:
|
||||
image: gitea/gitea:latest
|
||||
container_name: gitea
|
||||
environment:
|
||||
- USER_UID=1000
|
||||
- USER_GID=1000
|
||||
- GITEA__database__DB_TYPE=postgres
|
||||
- GITEA__database__HOST=db:5432
|
||||
- GITEA__database__NAME=gitea
|
||||
- GITEA__database__USER=gitea
|
||||
- GITEA__database__PASSWD=gitea_password
|
||||
restart: always
|
||||
volumes:
|
||||
- ./gitea:/data
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
ports:
|
||||
- "3000:3000"
|
||||
- "222:22"
|
||||
depends_on:
|
||||
- db
|
||||
networks:
|
||||
- gitea
|
||||
|
||||
db:
|
||||
image: postgres:14
|
||||
container_name: gitea-db
|
||||
restart: always
|
||||
environment:
|
||||
- POSTGRES_USER=gitea
|
||||
- POSTGRES_PASSWORD=gitea_password
|
||||
- POSTGRES_DB=gitea
|
||||
volumes:
|
||||
- ./postgres:/var/lib/postgresql/data
|
||||
networks:
|
||||
- gitea
|
||||
|
||||
networks:
|
||||
gitea:
|
||||
external: false
|
||||
```
|
||||
|
||||
Save this file and run:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Your Gitea instance will be available at `http://localhost:3000`.
|
||||
|
||||
### Option 2: Kubernetes Deployment
|
||||
|
||||
For those running a Kubernetes cluster (like me), here's a basic manifest to deploy Gitea:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: gitea-data
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: gitea
|
||||
labels:
|
||||
app: gitea
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: gitea
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: gitea
|
||||
spec:
|
||||
containers:
|
||||
- name: gitea
|
||||
image: gitea/gitea:latest
|
||||
ports:
|
||||
- containerPort: 3000
|
||||
- containerPort: 22
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /data
|
||||
env:
|
||||
- name: USER_UID
|
||||
value: "1000"
|
||||
- name: USER_GID
|
||||
value: "1000"
|
||||
volumes:
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: gitea-data
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: gitea
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- port: 3000
|
||||
targetPort: 3000
|
||||
name: web
|
||||
- port: 22
|
||||
targetPort: 22
|
||||
name: ssh
|
||||
selector:
|
||||
app: gitea
|
||||
```
|
||||
|
||||
Apply it with:
|
||||
|
||||
```bash
|
||||
kubectl apply -f gitea.yaml
|
||||
```
|
||||
|
||||
## Initial Configuration
|
||||
|
||||
After installation, you'll be greeted with Gitea's setup page. Here are the settings I recommend:
|
||||
|
||||
1. **Database Settings**: If you followed the Docker Compose example, your database is already configured.
|
||||
|
||||
2. **General Settings**:
|
||||
- Set your site title (e.g., "LaForceIT Git")
|
||||
- Disable user registration unless you're hosting for multiple people
|
||||
- Enable caching to improve performance
|
||||
|
||||
3. **Admin Account**: Create your admin user with a strong password.
|
||||
|
||||
My configuration looks something like this:
|
||||
|
||||
```ini
|
||||
[server]
|
||||
DOMAIN = git.laforce.it
|
||||
SSH_DOMAIN = git.laforce.it
|
||||
ROOT_URL = https://git.laforce.it/
|
||||
DISABLE_SSH = false
|
||||
SSH_PORT = 22
|
||||
|
||||
[service]
|
||||
DISABLE_REGISTRATION = true
|
||||
REQUIRE_SIGNIN_VIEW = true
|
||||
|
||||
[security]
|
||||
INSTALL_LOCK = true
|
||||
```
|
||||
|
||||
## Integrating with Your Development Workflow
|
||||
|
||||
Now that Gitea is running, here's how I integrate it into my workflow:
|
||||
|
||||
### 1. Adding Your SSH Key
|
||||
|
||||
First, add your SSH key to Gitea:
|
||||
|
||||
1. Go to Settings > SSH / GPG Keys
|
||||
2. Click "Add Key"
|
||||
3. Paste your public key and give it a name
|
||||
|
||||
### 2. Creating Your First Repository
|
||||
|
||||
1. Click the "+" button in the top right
|
||||
2. Select "New Repository"
|
||||
3. Fill in the details and initialize with a README if desired
|
||||
|
||||
### 3. Working with Your Repository
|
||||
|
||||
To clone your new repository:
|
||||
|
||||
```bash
|
||||
git clone git@your-gitea-server:username/repo-name.git
|
||||
```
|
||||
|
||||
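One gotcha with the Docker Compose setup above: SSH is published on host port 222, not 22, so the short `git@host:path` form won't work as-is. Either put the port in the URL or pin it in your SSH config:

```bash
# Clone with an explicit SSH port
git clone ssh://git@your-gitea-server:222/username/repo-name.git

# Or add this to ~/.ssh/config so the short form keeps working:
# Host your-gitea-server
#   Port 222
```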
Now you can work with it just like any Git repository:
|
||||
|
||||
```bash
|
||||
cd repo-name
|
||||
echo "# My awesome project" > README.md
|
||||
git add README.md
|
||||
git commit -m "Update README"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
## Advanced Gitea Features
|
||||
|
||||
Gitea isn't just a basic Git server - it has several powerful features that I use daily:
|
||||
|
||||
### CI/CD with Gitea Actions
|
||||
|
||||
Gitea recently added support for Actions, which are compatible with GitHub Actions workflows. Here's a simple example:
|
||||
|
||||
```yaml
|
||||
name: Go Build
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
build:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v4
|
||||
with:
|
||||
go-version: '1.20'
|
||||
- name: Build
|
||||
run: go build -v ./...
|
||||
- name: Test
|
||||
run: go test -v ./...
|
||||
```
|
||||
|
||||
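One caveat: depending on your Gitea version, Actions may be disabled by default, and it always needs at least one registered runner (act_runner) to execute jobs. A rough sketch of enabling it with the Docker Compose layout above; the config path follows the `./gitea:/data` volume mapping, and the runner registration flow is version-specific, so check the docs for your release:

```bash
# Enable Actions in app.ini (host path maps to /data/gitea/conf/app.ini in the container)
cat >> ./gitea/gitea/conf/app.ini <<'EOF'

[actions]
ENABLED = true
EOF

docker restart gitea
# Then register an act_runner against your instance so workflows actually run
```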
### Webhooks for Integration
|
||||
|
||||
I use webhooks to integrate Gitea with my deployment pipeline. Here's how to set up a simple webhook:
|
||||
|
||||
1. Navigate to your repository
|
||||
2. Go to Settings > Webhooks > Add Webhook
|
||||
3. Select "Gitea" or "Custom" depending on your needs
|
||||
4. Enter the URL of your webhook receiver
|
||||
5. Choose which events trigger the webhook
|
||||
|
||||
### Mirror Repositories
|
||||
|
||||
One of my favorite features is repository mirroring. I use this to keep a backup of important GitHub repositories:
|
||||
|
||||
1. Create a new repository
|
||||
2. Go to Settings > Mirror Settings
|
||||
3. Enter the URL of the repository you want to mirror
|
||||
4. Set the sync interval
|
||||
|
||||
## Security Considerations
|
||||
|
||||
When self-hosting any service, security is a top priority. Here's how I secure my Gitea instance:
|
||||
|
||||
1. **Reverse Proxy**: I put Gitea behind Traefik with automatic SSL certificates.
|
||||
|
||||
2. **2FA**: Enable two-factor authentication for your admin account.
|
||||
|
||||
3. **Regular Backups**: I back up both the Gitea data directory and the database daily (see the sketch after this list).
|
||||
|
||||
4. **Updates**: Keep Gitea updated to the latest version to get security fixes.
|
||||
|
||||
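For the backups mentioned in point 3, here's a minimal sketch matching the Docker Compose layout above. The backup directory, retention window, and the assumption that it runs from the compose directory are all mine; adapt as needed:

```bash
#!/bin/bash
# Nightly Gitea backup: dump the Postgres database and archive the data volume
set -euo pipefail

BACKUP_DIR=/backups/gitea
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"

# Database dump (container name and credentials from the compose file)
docker exec gitea-db pg_dump -U gitea gitea > "$BACKUP_DIR/gitea-db-$DATE.sql"

# Archive the Gitea data directory (repositories, LFS objects, config)
tar -czf "$BACKUP_DIR/gitea-data-$DATE.tar.gz" ./gitea

# Keep two weeks of backups
find "$BACKUP_DIR" -type f -mtime +14 -delete
```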
Here's a sample Traefik configuration for Gitea:
|
||||
|
||||
```yaml
|
||||
apiVersion: traefik.containo.us/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: gitea
|
||||
namespace: default
|
||||
spec:
|
||||
entryPoints:
|
||||
- websecure
|
||||
routes:
|
||||
- match: Host(`git.yourdomain.com`)
|
||||
kind: Rule
|
||||
services:
|
||||
- name: gitea
|
||||
port: 3000
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
```
|
||||
|
||||
## Why I Switched from GitHub to Gitea
|
||||
|
||||
People often ask me why I bother with self-hosting when GitHub offers so much for free. Here are my reasons:
|
||||
|
||||
1. **Ownership**: No sudden changes in terms of service affecting my workflow.
|
||||
2. **Privacy**: Some projects aren't meant for public hosting.
|
||||
3. **Learning**: Managing my own services teaches me valuable skills.
|
||||
4. **Integration**: It fits perfectly with my other self-hosted services.
|
||||
5. **Performance**: Local Git operations are lightning-fast.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Gitea has been a fantastic addition to my self-hosted infrastructure. It's reliable, lightweight, and provides all the features I need without the complexity of larger solutions like GitLab.
|
||||
|
||||
Whether you're a privacy enthusiast, a homelab tinkerer, or just someone who wants complete control over your code, Gitea is worth considering. The setup is straightforward, and the rewards are significant.
|
||||
|
||||
What about you? Are you self-hosting any of your development tools? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on January 18, 2024 with information about Gitea Actions and the latest configuration options._
|
|
@ -0,0 +1,169 @@
|
|||
---
|
||||
title: GitOps with Flux CD
|
||||
description: Implementing GitOps workflows on Kubernetes using Flux CD
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: devops
|
||||
tags:
|
||||
- kubernetes
|
||||
- gitops
|
||||
- flux
|
||||
- ci-cd
|
||||
- automation
|
||||
readTime: 10 min read
|
||||
---
|
||||
|
||||
# GitOps with Flux CD
|
||||
|
||||
GitOps is revolutionizing the way teams deploy and manage applications on Kubernetes. This guide will walk you through implementing a GitOps workflow using Flux CD, an open-source continuous delivery tool.
|
||||
|
||||
## What is GitOps?
|
||||
|
||||
GitOps is an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation.
|
||||
|
||||
With GitOps:
|
||||
- Git is the single source of truth for the desired state of your infrastructure
|
||||
- Changes to the desired state are declarative and version controlled
|
||||
- Approved changes are automatically applied to your infrastructure
|
||||
|
||||
## Why Flux CD?
|
||||
|
||||
Flux CD is a GitOps tool that ensures that your Kubernetes cluster matches the desired state specified in a Git repository. Key features include:
|
||||
|
||||
- Automated sync between your Git repository and cluster state
|
||||
- Support for Kustomize, Helm, and plain Kubernetes manifests
|
||||
- Multi-tenancy via RBAC
|
||||
- Strong security practices, including image verification
|
||||
|
||||
## Installation
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- A Kubernetes cluster (K3s, Kind, or any other distribution)
|
||||
- kubectl configured to access your cluster
|
||||
- A GitHub (or GitLab/Bitbucket) account and repository
|
||||
|
||||
### Installing Flux
|
||||
|
||||
1. Install the Flux CLI:
|
||||
|
||||
```bash
|
||||
curl -s https://fluxcd.io/install.sh | sudo bash
|
||||
```
|
||||
|
||||
2. Export your GitHub personal access token:
|
||||
|
||||
```bash
|
||||
export GITHUB_TOKEN=<your-token>
|
||||
```
|
||||
|
||||
3. Bootstrap Flux:
|
||||
|
||||
```bash
|
||||
flux bootstrap github \
|
||||
--owner=<your-github-username> \
|
||||
--repository=<repository-name> \
|
||||
--path=clusters/my-cluster \
|
||||
--personal
|
||||
```
|
||||
|
||||
## Setting Up Your First Application
|
||||
|
||||
1. Create a basic directory structure in your Git repository:
|
||||
|
||||
```
|
||||
└── clusters/
|
||||
└── my-cluster/
|
||||
├── flux-system/ # Created by bootstrap
|
||||
└── apps/
|
||||
└── podinfo/
|
||||
├── namespace.yaml
|
||||
├── deployment.yaml
|
||||
└── service.yaml
|
||||
```
|
||||
|
||||
2. Create a Flux Kustomization to deploy your app:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 5m0s
|
||||
path: ./clusters/my-cluster/apps/podinfo
|
||||
prune: true
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: flux-system
|
||||
```
|
||||
|
||||
3. Commit and push your changes, and Flux will automatically deploy your application!
|
||||
|
||||
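After the push, you can confirm Flux picked the change up from the CLI rather than waiting and hoping:

```bash
# Watch the Kustomization reconcile
flux get kustomizations --watch

# Or trigger an immediate sync instead of waiting for the interval
flux reconcile kustomization podinfo --with-source

# Check the workload (assumes namespace.yaml creates a "podinfo" namespace)
kubectl get pods -n podinfo
```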
## Advanced Features
|
||||
|
||||
### Automated Image Updates
|
||||
|
||||
Flux can automatically update your deployments when new images are available:
|
||||
|
||||
```yaml
|
||||
apiVersion: image.toolkit.fluxcd.io/v1beta1
|
||||
kind: ImageRepository
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
image: ghcr.io/stefanprodan/podinfo
|
||||
interval: 1m0s
|
||||
---
|
||||
apiVersion: image.toolkit.fluxcd.io/v1beta1
|
||||
kind: ImagePolicy
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: flux-system
|
||||
spec:
|
||||
imageRepositoryRef:
|
||||
name: podinfo
|
||||
policy:
|
||||
semver:
|
||||
range: 6.x.x
|
||||
```
|
||||
|
||||
### Working with Helm Charts
|
||||
|
||||
Flux makes it easy to manage Helm releases:
|
||||
|
||||
```yaml
|
||||
apiVersion: source.toolkit.fluxcd.io/v1beta2
|
||||
kind: HelmRepository
|
||||
metadata:
|
||||
name: bitnami
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 30m
|
||||
url: https://charts.bitnami.com/bitnami
|
||||
---
|
||||
apiVersion: helm.toolkit.fluxcd.io/v2beta1
|
||||
kind: HelmRelease
|
||||
metadata:
|
||||
name: redis
|
||||
namespace: flux-system
|
||||
spec:
|
||||
interval: 5m
|
||||
chart:
|
||||
spec:
|
||||
chart: redis
|
||||
version: "16.x"
|
||||
sourceRef:
|
||||
kind: HelmRepository
|
||||
name: bitnami
|
||||
values:
|
||||
architecture: standalone
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
Flux CD provides a powerful, secure, and flexible platform for implementing GitOps workflows. By following this guide, you'll be well on your way to managing your Kubernetes infrastructure using GitOps principles.
|
||||
|
||||
Stay tuned for more advanced GitOps patterns and best practices!
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: Setting Up a K3s Kubernetes Cluster
|
||||
pubDate: 2025-04-19
|
||||
description: A comprehensive guide to setting up and configuring a lightweight K3s Kubernetes cluster for your home lab
|
||||
category: Infrastructure
|
||||
tags:
|
||||
- kubernetes
|
||||
- k3s
|
||||
- infrastructure
|
||||
- homelab
|
||||
- containers
|
||||
draft: false
|
||||
heroImage:
|
||||
---
|
||||
|
||||
# Setting Up a K3s Kubernetes Cluster
|
||||
|
||||
Coming soon...
|
|
@ -0,0 +1,90 @@
|
|||
---
|
||||
title: K3s Installation Guide
|
||||
description: A comprehensive guide to installing and configuring K3s for your home lab
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/k3installation.png
|
||||
category: kubernetes
|
||||
tags:
|
||||
- kubernetes
|
||||
- k3s
|
||||
- homelab
|
||||
- tutorial
|
||||
readTime: 8 min read
|
||||
---
|
||||
|
||||
# K3s Installation Guide
|
||||
|
||||
K3s is a lightweight Kubernetes distribution designed for resource-constrained environments, perfect for home labs and edge computing. This guide will walk you through the installation process from start to finish.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A machine running Linux (Ubuntu 20.04+ recommended)
|
||||
- Minimum 1GB RAM (2GB+ recommended for multi-workload clusters)
|
||||
- 20GB+ of disk space
|
||||
- Root access or sudo privileges
|
||||
|
||||
## Basic Installation
|
||||
|
||||
The simplest way to install K3s is using the installation script:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
This will install K3s as a server and start the service. To verify it's running:
|
||||
|
||||
```bash
|
||||
sudo k3s kubectl get node
|
||||
```
|
||||
|
||||
## Advanced Installation Options
|
||||
|
||||
### Master Node with Custom Options
|
||||
|
||||
For more control, you can use environment variables:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
|
||||
```
|
||||
|
||||
### Adding Worker Nodes
|
||||
|
||||
1. On your master node, get the node token:
|
||||
|
||||
```bash
|
||||
sudo cat /var/lib/rancher/k3s/server/node-token
|
||||
```
|
||||
|
||||
2. On each worker node, run:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://<master-node-ip>:6443 K3S_TOKEN=<node-token> sh -
|
||||
```
|
||||
|
||||
## Accessing the Cluster
|
||||
|
||||
After installation, the kubeconfig file is written to `/etc/rancher/k3s/k3s.yaml`. To use kubectl externally:
|
||||
|
||||
1. Copy this file to your local machine
|
||||
2. Set the KUBECONFIG environment variable:
|
||||
|
||||
```bash
|
||||
export KUBECONFIG=/path/to/k3s.yaml
|
||||
```
|
||||
|
||||
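In practice that looks something like the following. The master IP is an example, and note that the copied file points at 127.0.0.1 by default, so it needs to be rewritten to the node's address:

```bash
# Copy the kubeconfig from the master and point it at the node's IP
scp root@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/k3s.yaml

export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```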
## Uninstalling K3s
|
||||
|
||||
If you need to uninstall:
|
||||
|
||||
- On server nodes: `/usr/local/bin/k3s-uninstall.sh`
|
||||
- On agent nodes: `/usr/local/bin/k3s-agent-uninstall.sh`
|
||||
|
||||
## Next Steps
|
||||
|
||||
Now that you have K3s running, you might want to:
|
||||
|
||||
- Deploy your first application
|
||||
- Set up persistent storage
|
||||
- Configure ingress for external access
|
||||
|
||||
Stay tuned for more guides on these topics!
|
|
@ -0,0 +1,289 @@
|
|||
---
|
||||
title: Setting Up Prometheus Monitoring in Kubernetes
|
||||
description: A comprehensive guide to implementing Prometheus monitoring in your Kubernetes cluster
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: devops
|
||||
tags:
|
||||
- kubernetes
|
||||
- monitoring
|
||||
- prometheus
|
||||
- grafana
|
||||
- observability
|
||||
readTime: 9 min read
|
||||
---
|
||||
|
||||
# Setting Up Prometheus Monitoring in Kubernetes
|
||||
|
||||
Effective monitoring is crucial for maintaining a healthy Kubernetes environment. Prometheus has become the de facto standard for metrics collection and alerting in cloud-native environments. This guide will walk you through setting up a complete Prometheus monitoring stack in your Kubernetes cluster.
|
||||
|
||||
## Why Prometheus?
|
||||
|
||||
Prometheus offers several advantages for Kubernetes monitoring:
|
||||
|
||||
- **Pull-based architecture**: Simplifies configuration and security
|
||||
- **Powerful query language (PromQL)**: For flexible data analysis
|
||||
- **Service discovery**: Automatically finds targets in dynamic environments
|
||||
- **Rich ecosystem**: Wide range of exporters and integrations
|
||||
- **CNCF graduated project**: Strong community and vendor support
|
||||
|
||||
## Components of the Monitoring Stack
|
||||
|
||||
We'll set up a complete monitoring stack consisting of:
|
||||
|
||||
1. **Prometheus**: Core metrics collection and storage
|
||||
2. **Alertmanager**: Handles alerts and notifications
|
||||
3. **Grafana**: Visualization and dashboards
|
||||
4. **Node Exporter**: Collects host-level metrics
|
||||
5. **kube-state-metrics**: Collects Kubernetes state metrics
|
||||
6. **Prometheus Operator**: Simplifies Prometheus management in Kubernetes
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A running Kubernetes cluster (K3s, EKS, GKE, etc.)
|
||||
- kubectl configured to access your cluster
|
||||
- Helm 3 installed
|
||||
|
||||
## Installation Using Helm
|
||||
|
||||
The easiest way to deploy Prometheus is using the kube-prometheus-stack Helm chart, which includes all the components mentioned above.
|
||||
|
||||
### 1. Add the Prometheus Community Helm Repository
|
||||
|
||||
```bash
|
||||
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
|
||||
helm repo update
|
||||
```
|
||||
|
||||
### 2. Create a Namespace for Monitoring
|
||||
|
||||
```bash
|
||||
kubectl create namespace monitoring
|
||||
```
|
||||
|
||||
### 3. Configure Values
|
||||
|
||||
Create a `values.yaml` file with your custom configuration:
|
||||
|
||||
```yaml
|
||||
prometheus:
|
||||
prometheusSpec:
|
||||
retention: 15d
|
||||
resources:
|
||||
requests:
|
||||
memory: 256Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 2Gi
|
||||
cpu: 500m
|
||||
storageSpec:
|
||||
volumeClaimTemplate:
|
||||
spec:
|
||||
storageClassName: standard
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 20Gi
|
||||
|
||||
alertmanager:
|
||||
alertmanagerSpec:
|
||||
storage:
|
||||
volumeClaimTemplate:
|
||||
spec:
|
||||
storageClassName: standard
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
|
||||
grafana:
|
||||
persistence:
|
||||
enabled: true
|
||||
storageClassName: standard
|
||||
size: 10Gi
|
||||
adminPassword: "prom-operator" # Change this!
|
||||
|
||||
nodeExporter:
|
||||
enabled: true
|
||||
|
||||
kubeStateMetrics:
|
||||
enabled: true
|
||||
```
|
||||
|
||||
### 4. Install the Helm Chart
|
||||
|
||||
```bash
|
||||
helm install prometheus prometheus-community/kube-prometheus-stack \
|
||||
--namespace monitoring \
|
||||
--values values.yaml
|
||||
```
|
||||
|
||||
### 5. Verify the Installation
|
||||
|
||||
Check that all the pods are running:
|
||||
|
||||
```bash
|
||||
kubectl get pods -n monitoring
|
||||
```
|
||||
|
||||
## Accessing the UIs
|
||||
|
||||
By default, the components don't have external access. You can use port-forwarding to access them:
|
||||
|
||||
### Prometheus UI
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
|
||||
```
|
||||
|
||||
Then access Prometheus at http://localhost:9090
|
||||
|
||||
### Grafana
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
|
||||
```
|
||||
|
||||
Then access Grafana at http://localhost:3000 (default credentials: admin/prom-operator)
|
||||
|
||||
### Alertmanager
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n monitoring svc/prometheus-alertmanager 9093:9093
|
||||
```
|
||||
|
||||
Then access Alertmanager at http://localhost:9093
|
||||
|
||||
## For Production: Exposing Services
|
||||
|
||||
For production environments, you'll want to set up proper ingress. Here's an example using a basic Ingress resource:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: prometheus-ingress
|
||||
namespace: monitoring
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/ssl-redirect: "true"
|
||||
spec:
|
||||
rules:
|
||||
- host: prometheus.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-operated
|
||||
port:
|
||||
number: 9090
|
||||
- host: grafana.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-grafana
|
||||
port:
|
||||
number: 80
|
||||
- host: alertmanager.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: prometheus-alertmanager
|
||||
port:
|
||||
number: 9093
|
||||
```
|
||||
|
||||
## Configuring Alerting
|
||||
|
||||
### 1. Set Up Alert Rules
|
||||
|
||||
Alert rules can be created using the PrometheusRule custom resource:
|
||||
|
||||
```yaml
|
||||
apiVersion: monitoring.coreos.com/v1
|
||||
kind: PrometheusRule
|
||||
metadata:
|
||||
name: node-alerts
|
||||
namespace: monitoring
|
||||
labels:
|
||||
release: prometheus
|
||||
spec:
|
||||
groups:
|
||||
- name: node.rules
|
||||
rules:
|
||||
- alert: HighNodeCPU
|
||||
expr: instance:node_cpu_utilisation:rate1m > 0.8
|
||||
for: 5m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "High CPU usage on {{ $labels.instance }}"
|
||||
description: "CPU usage is above 80% for 5 minutes on node {{ $labels.instance }}"
|
||||
```
|
||||
|
||||
### 2. Configure Alert Receivers
|
||||
|
||||
Configure Alertmanager to send notifications by creating a Secret with your configuration:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: alertmanager-prometheus-alertmanager
|
||||
namespace: monitoring
|
||||
stringData:
|
||||
alertmanager.yaml: |
|
||||
global:
|
||||
resolve_timeout: 5m
|
||||
slack_api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
|
||||
|
||||
route:
|
||||
group_by: ['job', 'alertname', 'namespace']
|
||||
group_wait: 30s
|
||||
group_interval: 5m
|
||||
repeat_interval: 12h
|
||||
receiver: 'slack-notifications'
|
||||
routes:
|
||||
- receiver: 'slack-notifications'
|
||||
matchers:
|
||||
- severity =~ "warning|critical"
|
||||
|
||||
receivers:
|
||||
- name: 'slack-notifications'
|
||||
slack_configs:
|
||||
- channel: '#alerts'
|
||||
send_resolved: true
|
||||
title: '{{ template "slack.default.title" . }}'
|
||||
text: '{{ template "slack.default.text" . }}'
|
||||
type: Opaque
|
||||
```
|
||||
|
||||
## Custom Dashboards
|
||||
|
||||
Grafana comes pre-configured with several useful dashboards, but you can import more from [Grafana.com](https://grafana.com/grafana/dashboards/).
|
||||
|
||||
Some recommended dashboard IDs to import:
|
||||
- 1860: Node Exporter Full
|
||||
- 12740: Kubernetes Monitoring
|
||||
- 13332: Prometheus Stats
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Insufficient Resources**: Prometheus can be resource-intensive. Adjust resource limits if pods are being OOMKilled.
|
||||
2. **Storage Issues**: Ensure your storage class supports the access modes you've configured.
|
||||
3. **ServiceMonitor not working**: Check that the label selectors match your services (a minimal example follows this list).
|
||||
|
||||
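For reference, here's a minimal ServiceMonitor sketch. The `release: prometheus` label is what the chart's default selector matches given the release name used earlier; the app label, namespace, and port name are placeholders for your own service:

```bash
# Minimal ServiceMonitor for a Service labelled app: my-app exposing metrics on a port named "http"
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: my-app
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: http
      path: /metrics
      interval: 30s
EOF
```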
## Conclusion
|
||||
|
||||
You now have a fully functional Prometheus monitoring stack for your Kubernetes cluster. This setup provides comprehensive metrics collection, visualization, and alerting capabilities essential for maintaining a healthy and performant cluster.
|
||||
|
||||
In future articles, we'll explore advanced topics like custom exporters, recording rules for performance, and integrating with other observability tools like Loki for logs and Tempo for traces.
|
|
@ -0,0 +1,191 @@
|
|||
---
|
||||
title: Complete Proxmox VE Setup Guide
|
||||
description: A step-by-step guide to setting up Proxmox VE for your home lab virtualization needs
|
||||
pubDate: 2025-04-19
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
category: infrastructure
|
||||
tags:
|
||||
- proxmox
|
||||
- virtualization
|
||||
- homelab
|
||||
- infrastructure
|
||||
readTime: 12 min read
|
||||
---
|
||||
|
||||
# Complete Proxmox VE Setup Guide
|
||||
|
||||
Proxmox Virtual Environment (VE) is a complete open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and LXC containers with software-defined storage and networking functionality on a single platform. This guide will walk you through installing and configuring Proxmox VE for your home lab.
|
||||
|
||||
## Why Choose Proxmox VE?
|
||||
|
||||
- **Open Source**: Free to use with optional paid enterprise support
|
||||
- **Full-featured**: Combines KVM hypervisor and LXC containers
|
||||
- **Web Interface**: Easy-to-use management interface
|
||||
- **Clustering**: Built-in high availability features
|
||||
- **Storage Flexibility**: Support for local, SAN, NFS, Ceph, and more
|
||||
|
||||
## Hardware Requirements
|
||||
|
||||
- 64-bit CPU with virtualization extensions (Intel VT-x/AMD-V)
|
||||
- At least 2GB RAM (8GB+ recommended)
|
||||
- Hard drive for OS installation (SSD recommended)
|
||||
- Additional storage for VMs and containers
|
||||
- Network interface card
|
||||
|
||||
## Installation
|
||||
|
||||
### 1. Prepare for Installation
|
||||
|
||||
1. Download the Proxmox VE ISO from [proxmox.com/downloads](https://www.proxmox.com/downloads)
|
||||
2. Create a bootable USB drive using tools like Rufus, Etcher, or dd
|
||||
3. Ensure virtualization is enabled in your BIOS/UEFI
|
||||
|
||||
### 2. Install Proxmox VE
|
||||
|
||||
1. Boot from the USB drive
|
||||
2. Select "Install Proxmox VE"
|
||||
3. Accept the EULA
|
||||
4. Select the target hard drive (this will erase all data on the drive)
|
||||
5. Configure country, time zone, and keyboard layout
|
||||
6. Set a strong root password and provide an email address
|
||||
7. Configure network settings:
|
||||
- Enter a hostname (FQDN format: proxmox.yourdomain.local)
|
||||
- IP address, netmask, gateway
|
||||
- DNS server
|
||||
8. Review the settings and confirm to start the installation
|
||||
9. Once completed, remove the USB drive and reboot
|
||||
|
||||
### 3. Initial Configuration
|
||||
|
||||
Access the web interface by navigating to `https://<your-proxmox-ip>:8006` in your browser. Log in with:
|
||||
- Username: root
|
||||
- Password: (the one you set during installation)
|
||||
|
||||
## Post-Installation Tasks
|
||||
|
||||
### 1. Update Proxmox VE
|
||||
|
||||
```bash
|
||||
apt update
|
||||
apt dist-upgrade
|
||||
```
|
||||
|
||||
### 2. Remove Subscription Notice (Optional)
|
||||
|
||||
For home lab use, you can remove the subscription notice:
|
||||
|
||||
```bash
|
||||
echo "DPkg::Post-Invoke { \"dpkg -V proxmox-widget-toolkit | grep -q '/proxmoxlib\.js$'; if [ \$? -eq 1 ]; then { echo 'Removing subscription nag from UI...'; sed -i '/.*data\.status.*subscription.*/d' /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js; }; fi\"; };" > /etc/apt/apt.conf.d/no-subscription-warning
|
||||
```
|
||||
|
||||
### 3. Configure Storage
|
||||
|
||||
#### Local Storage
|
||||
|
||||
By default, Proxmox VE creates several storage locations:
|
||||
|
||||
- **local**: For ISO images, container templates, and snippets
|
||||
- **local-lvm**: For VM disk images
|
||||
|
||||
To add a new storage, go to Datacenter > Storage > Add:
|
||||
|
||||
- For local directories: Select "Directory"
|
||||
- For network storage: Select "NFS" or "CIFS"
|
||||
- For block storage: Select "LVM", "LVM-Thin", or "ZFS"
|
||||
|
||||
#### ZFS Storage Pool (Recommended)
|
||||
|
||||
ZFS offers excellent performance and data protection:
|
||||
|
||||
```bash
|
||||
# Create a ZFS pool using available disks
|
||||
zpool create -f rpool /dev/sdb /dev/sdc
|
||||
|
||||
# Add the pool to Proxmox
|
||||
pvesm add zfspool rpool -pool rpool
|
||||
```
|
||||
|
||||
### 4. Set Up Networking
|
||||
|
||||
#### Network Bridges
|
||||
|
||||
Proxmox VE creates a default bridge (vmbr0) during installation. To add more:
|
||||
|
||||
1. Go to Node > Network > Create > Linux Bridge
|
||||
2. Configure the bridge:
|
||||
- Name: vmbr1
|
||||
- IP address/CIDR: 192.168.1.1/24 (or leave empty for unmanaged bridge)
|
||||
- Bridge ports: (physical interface, e.g., eth1)
|
||||
|
||||
#### VLAN Configuration
|
||||
|
||||
For VLAN support:
|
||||
|
||||
1. Ensure the bridge has VLAN awareness enabled
|
||||
2. In VM network settings, specify VLAN tags
|
||||
|
||||
## Creating Virtual Machines and Containers
|
||||
|
||||
### Virtual Machines (KVM)
|
||||
|
||||
1. Go to Create VM
|
||||
2. Fill out the wizard:
|
||||
- General: Name, Resource Pool
|
||||
- OS: ISO image, type, and version
|
||||
- System: BIOS/UEFI, Machine type
|
||||
- Disks: Size, format, storage location
|
||||
- CPU: Cores, type
|
||||
- Memory: RAM size
|
||||
- Network: Bridge, model
|
||||
3. Click Finish to create the VM
|
||||
|
||||
### Containers (LXC)
|
||||
|
||||
1. Go to Create CT
|
||||
2. Fill out the wizard:
|
||||
- General: Hostname, Password
|
||||
- Template: Select from available templates
|
||||
- Disks: Size, storage location
|
||||
- CPU: Cores
|
||||
- Memory: RAM size
|
||||
- Network: IP address, bridge
|
||||
- DNS: DNS servers
|
||||
3. Click Finish to create the container
|
||||
|
||||
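If you'd rather script this than click through the wizards, the same operations are available from the shell via `qm` (VMs) and `pct` (containers). A rough sketch; the IDs, ISO/template names, and storage names are examples, not things the installer created for you:

```bash
# Create a VM (ID 100) with a 32GB disk on local-lvm and a bridged NIC
qm create 100 --name test-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --cdrom local:iso/ubuntu-22.04-live-server-amd64.iso

# Create an LXC container (ID 101) from a previously downloaded template
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname test-ct --memory 1024 --cores 1 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
```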
## Backup Configuration
|
||||
|
||||
### Setting Up Backups
|
||||
|
||||
1. Go to Datacenter > Backup
|
||||
2. Add a new backup job:
|
||||
- Select storage location
|
||||
- Set schedule (daily, weekly, etc.)
|
||||
- Choose VMs/containers to back up
|
||||
- Configure compression and mode
|
||||
|
||||
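The same job can be run ad hoc from the command line, which is handy before a risky change (the guest ID and storage name below are placeholders):

```bash
# One-off snapshot-mode backup of guest 101 to the "local" storage, compressed with zstd
vzdump 101 --storage local --mode snapshot --compress zstd
```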
## Performance Tuning
|
||||
|
||||
### CPU
|
||||
|
||||
For VMs that need consistent performance:
|
||||
|
||||
- Set CPU type to "host" for best performance
|
||||
- Reserve CPU cores for critical VMs
|
||||
- Use CPU pinning for high-performance workloads
|
||||
|
||||
### Memory
|
||||
|
||||
- Enable KSM (Kernel Same-page Merging) for better memory usage
|
||||
- Set appropriate memory ballooning for VMs
|
||||
|
||||
### Storage
|
||||
|
||||
- Use SSDs for VM disks when possible
|
||||
- Enable write-back caching for improved performance
|
||||
- Consider ZFS for important data with appropriate RAM allocation
|
||||
|
||||
## Conclusion
|
||||
|
||||
Proxmox VE is a powerful, flexible virtualization platform perfect for home labs. With its combination of virtual machines and containers, you can build a versatile lab environment for testing, development, and running production services.
|
||||
|
||||
After following this guide, you should have a fully functional Proxmox VE server ready to host your virtual infrastructure. In future articles, we'll explore advanced topics like clustering, high availability, and integration with other infrastructure tools.
|
|
@ -0,0 +1,23 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to push blog content to Gitea with proper symlink handling
|
||||
# This ensures actual content is pushed, not just symlinks
|
||||
|
||||
# Navigate to the blog repository
|
||||
cd ~/Projects/laforceit-blog
|
||||
|
||||
# Configure git to handle symlinks correctly
|
||||
git config --local core.symlinks true
|
||||
|
||||
# Add all changes (note: git add has no --dereference flag; the pre-commit
# hook is what replaces symlinked directories with their actual content)
|
||||
git add -A
|
||||
|
||||
# Get commit message from argument or use default
|
||||
COMMIT_MSG=${1:-"Update blog content"}
|
||||
|
||||
# Commit changes
|
||||
git commit -m "$COMMIT_MSG"
|
||||
|
||||
# Push to the repository
|
||||
git push origin main
|
||||
|
|
@ -0,0 +1,416 @@
|
|||
---
|
||||
title: Building a Digital Garden with Quartz, Obsidian, and Astro
|
||||
description: How to transform your Obsidian notes into a beautiful, connected public digital garden using Quartz and Astro
|
||||
pubDate: 2025-04-18
|
||||
updatedDate: 2025-04-19
|
||||
category: Services
|
||||
tags:
|
||||
- quartz
|
||||
- obsidian
|
||||
- digital-garden
|
||||
- knowledge-management
|
||||
- astro
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've been taking digital notes for decades now. From simple `.txt` files to OneNote, Evernote, Notion, and now Obsidian. But for years, I've been wrestling with a question: how do I share my knowledge with others in a way that preserves the connections between ideas?
|
||||
|
||||
Enter [Quartz](https://quartz.jzhao.xyz/) - an open-source static site generator designed specifically for transforming Obsidian vaults into interconnected digital gardens. I've been using it with Astro to power this very blog, and today I want to show you how you can do the same.
|
||||
|
||||
## What is a Digital Garden?
|
||||
|
||||
Before we dive into the technical stuff, let's talk about what a digital garden actually is. Unlike traditional blogs organized chronologically, a digital garden is more like... well, a garden! It's a collection of interconnected notes that grow over time.
|
||||
|
||||
Think of each note as a plant. Some are seedlings (early ideas), some are in full bloom (well-developed thoughts), and others might be somewhere in between. The beauty of a digital garden is that it evolves organically, with connections forming between ideas just as they do in our brains.
|
||||
|
||||
## Why Quartz + Obsidian + Astro?
|
||||
|
||||
I settled on this stack after trying numerous solutions, and here's why:
|
||||
|
||||
- **Obsidian**: Provides a fantastic editing experience with bi-directional linking, graph views, and markdown support.
|
||||
- **Quartz**: Transforms Obsidian vaults into interconnected websites with minimal configuration.
|
||||
- **Astro**: Adds flexibility for custom layouts, components, and integrations not available in Quartz alone.
|
||||
|
||||
It's a match made in heaven - I get the best note-taking experience with Obsidian, the connection-preserving features of Quartz, and the full power of a modern web framework with Astro.
|
||||
|
||||
## Setting Up Your Digital Garden
|
||||
|
||||
Let's walk through how to set this up step by step.
|
||||
|
||||
### Step 1: Install Obsidian and Create a Vault
|
||||
|
||||
If you haven't already, [download Obsidian](https://obsidian.md/) and create a new vault for your digital garden content. I recommend having a separate vault for public content to keep it clean.
|
||||
|
||||
### Step 2: Set Up Quartz
|
||||
|
||||
Quartz is essentially a template for your Obsidian content. Here's how to get it running:
|
||||
|
||||
```bash
|
||||
# Clone the Quartz repository
|
||||
git clone https://github.com/jackyzha0/quartz.git
|
||||
cd quartz
|
||||
|
||||
# Install dependencies
|
||||
npm install
|
||||
|
||||
# Copy your Obsidian vault content to the content folder
|
||||
# (or symlink it, which is what I do)
|
||||
ln -s /path/to/your/obsidian/vault content
|
||||
```
|
||||
|
||||
Once installed, you can customize your Quartz configuration in the `quartz.config.ts` file. Here's a snippet of mine:
|
||||
|
||||
```typescript
|
||||
// quartz.config.ts
|
||||
const config: QuartzConfig = {
|
||||
configuration: {
|
||||
pageTitle: "LaForceIT Digital Garden",
|
||||
enableSPA: true,
|
||||
enablePopovers: true,
|
||||
analytics: {
|
||||
provider: "plausible",
|
||||
},
|
||||
baseUrl: "blog.laforceit.com",
|
||||
ignorePatterns: ["private", "templates", ".obsidian"],
|
||||
theme: {
|
||||
typography: {
|
||||
header: "Space Grotesk",
|
||||
body: "Space Grotesk",
|
||||
code: "JetBrains Mono",
|
||||
},
|
||||
colors: {
|
||||
lightMode: {
|
||||
light: "#fafafa",
|
||||
lightgray: "#e5e5e5",
|
||||
gray: "#b8b8b8",
|
||||
darkgray: "#4e4e4e",
|
||||
dark: "#2b2b2b",
|
||||
secondary: "#284b63",
|
||||
tertiary: "#84a59d",
|
||||
highlight: "rgba(143, 159, 169, 0.15)",
|
||||
},
|
||||
darkMode: {
|
||||
light: "#050a18",
|
||||
lightgray: "#0f172a",
|
||||
gray: "#1e293b",
|
||||
darkgray: "#94a3b8",
|
||||
dark: "#e2e8f0",
|
||||
secondary: "#06b6d4",
|
||||
tertiary: "#3b82f6",
|
||||
highlight: "rgba(6, 182, 212, 0.15)",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
plugins: {
|
||||
transformers: [
|
||||
// plugins here
|
||||
],
|
||||
filters: [
|
||||
// filters here
|
||||
],
|
||||
emitters: [
|
||||
// emitters here
|
||||
],
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
### Step 3: Integrating with Astro
|
||||
|
||||
Now, where Quartz ends and Astro begins is where the magic really happens. Here's how I integrated them:
|
||||
|
||||
1. Create a new Astro project:
|
||||
|
||||
```bash
|
||||
npm create astro@latest my-digital-garden
|
||||
cd my-digital-garden
|
||||
```
|
||||
|
||||
2. Set up your Astro project structure:
|
||||
|
||||
```
|
||||
my-digital-garden/
|
||||
├── public/
|
||||
├── src/
|
||||
│ ├── components/
|
||||
│ ├── layouts/
|
||||
│ ├── pages/
|
||||
│ │ └── index.astro
|
||||
│ └── content/ (this will point to your Quartz content)
|
||||
├── astro.config.mjs
|
||||
└── package.json
|
||||
```
|
||||
|
||||
3. Configure Astro to work with Quartz's output:
|
||||
|
||||
```typescript
|
||||
// astro.config.mjs
|
||||
import { defineConfig } from 'astro/config';
|
||||
|
||||
export default defineConfig({
|
||||
integrations: [
|
||||
// Your integrations here
|
||||
],
|
||||
markdown: {
|
||||
remarkPlugins: [
|
||||
// The same remark plugins used in Quartz
|
||||
],
|
||||
rehypePlugins: [
|
||||
// The same rehype plugins used in Quartz
|
||||
]
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
4. Create a component for the Obsidian graph view (similar to what I use on this blog):
|
||||
|
||||
```astro
|
||||
---
|
||||
// Graph.astro
|
||||
---
|
||||
|
||||
<div class="graph-container">
|
||||
<div class="graph-visualization">
|
||||
<div class="graph-overlay"></div>
|
||||
<div class="graph-nodes" id="obsidian-graph"></div>
|
||||
</div>
|
||||
<div class="graph-caption">Interactive visualization of interconnected notes</div>
|
||||
</div>
|
||||
|
||||
<script src="/blog/scripts/neural-network.js" defer></script>
|
||||
|
||||
<style>
|
||||
.graph-container {
|
||||
width: 100%;
|
||||
height: 400px;
|
||||
position: relative;
|
||||
border-radius: 0.75rem;
|
||||
overflow: hidden;
|
||||
background: rgba(15, 23, 42, 0.6);
|
||||
border: 1px solid rgba(56, 189, 248, 0.2);
|
||||
}
|
||||
|
||||
.graph-visualization {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.graph-overlay {
|
||||
position: absolute;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
background: radial-gradient(
|
||||
circle at center,
|
||||
transparent 0%,
|
||||
rgba(15, 23, 42, 0.5) 100%
|
||||
);
|
||||
z-index: 1;
|
||||
pointer-events: none;
|
||||
}
|
||||
|
||||
.graph-nodes {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
position: relative;
|
||||
z-index: 2;
|
||||
}
|
||||
|
||||
.graph-caption {
|
||||
position: absolute;
|
||||
bottom: 0.75rem;
|
||||
left: 0;
|
||||
width: 100%;
|
||||
text-align: center;
|
||||
color: rgba(148, 163, 184, 0.8);
|
||||
font-size: 0.875rem;
|
||||
z-index: 3;
|
||||
}
|
||||
</style>
|
||||
```
|
||||
|
||||
## Writing Content for Your Digital Garden
|
||||
|
||||
Now that you have the technical setup, let's talk about how to structure your content. Here's how I approach it:
|
||||
|
||||
### 1. Use Meaningful Frontmatter
|
||||
|
||||
Each note should have frontmatter that helps organize and categorize it:
|
||||
|
||||
```markdown
|
||||
---
|
||||
title: "Building a Digital Garden"
|
||||
description: "How to create a connected knowledge base with Obsidian and Quartz"
|
||||
pubDate: 2023-11-05
|
||||
updatedDate: 2024-02-10
|
||||
category: "Knowledge Management"
|
||||
tags: ["digital-garden", "obsidian", "quartz", "notes"]
|
||||
---
|
||||
```
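
If you're using Astro's content collections (I am), you can enforce this frontmatter with a schema so a malformed note fails the build instead of rendering oddly. A minimal sketch, assuming a collection named `notes` and the fields shown above:

```typescript
// src/content/config.ts - hypothetical schema matching the frontmatter above
import { defineCollection, z } from 'astro:content';

const notes = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    description: z.string().optional(),
    pubDate: z.coerce.date(),
    updatedDate: z.coerce.date().optional(),
    category: z.string().optional(),
    tags: z.array(z.string()).default([]),
  }),
});

export const collections = { notes };
```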
|
||||
|
||||
### 2. Create Meaningful Connections
|
||||
|
||||
The power of a digital garden comes from connections. Use Obsidian's `[[wiki-links]]` liberally to connect related concepts:
|
||||
|
||||
```markdown
|
||||
I'm using the [[quartz-digital-garden|Quartz framework]] to power my digital garden. It works well with [[obsidian-setup|my Obsidian workflow]].
|
||||
```
|
||||
|
||||
### 3. Use Consistent Structure for Your Notes
|
||||
|
||||
I follow a template for most of my notes to maintain consistency:
|
||||
|
||||
- Brief introduction/definition
|
||||
- Why it matters
|
||||
- Key concepts
|
||||
- Examples or code snippets
|
||||
- References to related notes
|
||||
- External resources
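
Put together, a new note based on that structure starts from a skeleton like this (the headings are just my convention - adapt to taste):

```markdown
---
title: "Topic Name"
description: "One-line definition of the topic"
pubDate: 2024-01-15
tags: ["topic"]
---

Brief introduction and definition.

## Why It Matters
...

## Key Concepts
...

## Examples
...

## Related Notes
- [[another-note|Another Note]]

## External Resources
- https://example.com
```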
|
||||
|
||||
### 4. Leverage Obsidian Features
|
||||
|
||||
Make use of Obsidian's unique features that Quartz preserves:
|
||||
|
||||
- **Callouts** for highlighting important information (see the example after this list)
|
||||
- **Dataview** for creating dynamic content (if using the Dataview plugin)
|
||||
- **Graphs** to visualize connections
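
For example, a callout is just a tagged blockquote in Obsidian, and Quartz renders it as a styled admonition:

```markdown
> [!tip] Start small
> You don't need to publish your whole vault - a handful of well-linked notes is enough to get going.
```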
|
||||
|
||||
## Deploying Your Digital Garden
|
||||
|
||||
Here's how I deploy my digital garden to Cloudflare Pages:
|
||||
|
||||
1. **Build Configuration**:
|
||||
|
||||
```
|
||||
// Build command
|
||||
astro build
|
||||
|
||||
// Output directory
|
||||
dist
|
||||
```
|
||||
|
||||
2. **Automated Deployment from Git**:
|
||||
|
||||
I have a GitHub Actions workflow that publishes my content whenever I push changes:
|
||||
|
||||
```yaml
|
||||
name: Deploy to Cloudflare Pages
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main]
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
submodules: true
|
||||
- uses: actions/setup-node@v3
|
||||
with:
|
||||
node-version: 18
|
||||
- name: Install Dependencies
|
||||
run: npm ci
|
||||
- name: Build
|
||||
run: npm run build
|
||||
- name: Deploy to Cloudflare Pages
|
||||
uses: cloudflare/pages-action@v1
|
||||
with:
|
||||
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
|
||||
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
|
||||
projectName: digital-garden
|
||||
directory: dist
|
||||
```
|
||||
|
||||
## My Digital Garden Workflow
|
||||
|
||||
Here's my actual workflow for maintaining my digital garden:
|
||||
|
||||
1. **Daily note-taking** in Obsidian (private vault)
|
||||
2. **Weekly review** where I refine notes and connections
|
||||
3. **Publishing prep** where I move polished notes to my public vault
|
||||
4. **Git commit and push** which triggers the deployment
|
||||
|
||||
The beauty of this system is that my private thinking and public sharing use the same tools and formats, reducing friction between capturing thoughts and sharing them.
|
||||
|
||||
## Adding Interactive Elements
|
||||
|
||||
One of my favorite parts of using Astro with Quartz is that I can add interactive elements to my digital garden. For example, the neural graph visualization you see on this blog:
|
||||
|
||||
```javascript
|
||||
// Simplified version of the neural network graph code
|
||||
document.addEventListener('DOMContentLoaded', () => {
|
||||
const container = document.querySelector('.graph-nodes');
|
||||
if (!container) return;
|
||||
|
||||
// Configuration
|
||||
const CONNECTION_DISTANCE = 30;
|
||||
const MIN_NODE_SIZE = 4;
|
||||
const MAX_NODE_SIZE = 12;
|
||||
|
||||
// Fetch blog data from the same origin
|
||||
async function fetchQuartzData() {
|
||||
// In production, fetch from your actual API
|
||||
return {
|
||||
nodes: [
|
||||
{
|
||||
id: 'digital-garden',
|
||||
title: 'Digital Garden',
|
||||
tags: ['knowledge-management', 'notes'],
|
||||
category: 'concept'
|
||||
},
|
||||
// More nodes here
|
||||
],
|
||||
links: [
|
||||
{ source: 'digital-garden', target: 'obsidian' },
|
||||
// More links here
|
||||
]
|
||||
};
|
||||
}
|
||||
|
||||
// Create the visualization
|
||||
fetchQuartzData().then(graphData => {
|
||||
// Create nodes and connections
|
||||
// (implementation details omitted for brevity)
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
## Tips for a Successful Digital Garden
|
||||
|
||||
After maintaining my digital garden for over a year, here are my top tips:
|
||||
|
||||
1. **Start small** - Begin with a few well-connected notes rather than trying to publish everything at once.
|
||||
|
||||
2. **Focus on connections** - The value is in the links between notes, not just the notes themselves.
|
||||
|
||||
3. **Embrace imperfection** - Digital gardens are meant to grow and evolve; they're never "finished."
|
||||
|
||||
4. **Build in public** - Share your process and learnings as you go.
|
||||
|
||||
5. **Use consistent formatting** - Makes it easier for readers to navigate your garden.
|
||||
|
||||
## The Impact of My Digital Garden
|
||||
|
||||
Since starting my digital garden, I've experienced several unexpected benefits:
|
||||
|
||||
- **Clearer thinking** - Writing for an audience forces me to clarify my thoughts.
|
||||
- **Unexpected connections** - I've discovered relationships between ideas I hadn't noticed before.
|
||||
- **Community building** - I've connected with others interested in the same topics.
|
||||
- **Learning accountability** - Publishing regularly motivates me to keep learning.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Building a digital garden with Quartz, Obsidian, and Astro has transformed how I learn and share knowledge. It's become so much more than a blog - it's a living representation of my thinking that grows more valuable with each connection I make.
|
||||
|
||||
If you're considering starting your own digital garden, I hope this guide gives you a solid foundation. Remember, the best garden is the one you actually tend to, so start simple and let it grow naturally over time.
|
||||
|
||||
What about you? Are you maintaining a digital garden or thinking about starting one? I'd love to hear about your experience in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on February 10, 2024 with information about the latest Quartz configuration options and integration with Astro._
|
|
@ -0,0 +1,328 @@
|
|||
---
|
||||
title: "Managing Kubernetes with Rancher: The Home Lab Way"
|
||||
description: How to set up, configure, and get the most out of Rancher for managing your home Kubernetes clusters
|
||||
pubDate: 2025-04-19
|
||||
updatedDate: 2025-04-18
|
||||
category: Services
|
||||
tags:
|
||||
- rancher
|
||||
- kubernetes
|
||||
- k3s
|
||||
- devops
|
||||
- containers
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
I've been running Kubernetes at home for years now, and I've tried just about every management tool out there. From kubectl and a bunch of YAML files to various dashboards and UIs, I've experimented with it all. But the one tool that's been a constant in my home lab journey is [Rancher](https://rancher.com/) - a complete container management platform that makes Kubernetes management almost... dare I say it... enjoyable?
|
||||
|
||||
Today, I want to walk you through setting up Rancher in your home lab and show you some of the features that have made it indispensable for me.
|
||||
|
||||
## What is Rancher and Why Should You Care?
|
||||
|
||||
Rancher is an open-source platform for managing Kubernetes clusters. Think of it as mission control for all your container workloads. It gives you:
|
||||
|
||||
- A unified interface for managing multiple clusters (perfect if you're running different K8s distros)
|
||||
- Simplified deployment of applications via apps & marketplace
|
||||
- Built-in monitoring, logging, and alerting
|
||||
- User management and role-based access control
|
||||
- A clean, intuitive UI that's actually useful (rare in the Kubernetes world!)
|
||||
|
||||
If you're running even a single Kubernetes cluster at home, Rancher can save you countless hours of typing `kubectl` commands and editing YAML files by hand.
|
||||
|
||||
## Setting Up Rancher in Your Home Lab
|
||||
|
||||
There are several ways to deploy Rancher, but I'll focus on two approaches that work well for home labs.
|
||||
|
||||
### Option 1: Docker Deployment (Quickstart)
|
||||
|
||||
The fastest way to get up and running is with Docker:
|
||||
|
||||
```bash
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
--privileged \
|
||||
rancher/rancher:latest
|
||||
```
|
||||
|
||||
That's it! Navigate to `https://your-server-ip` and you'll be prompted to set a password and server URL.
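
One caveat: on recent Rancher releases (2.6+), the first login expects a generated bootstrap password rather than letting you choose one straight away. If the setup screen asks for it, you can pull it from the container logs - something like this (substitute your actual container name or ID):

```bash
# Grab the generated bootstrap password from the Rancher container logs
docker logs rancher 2>&1 | grep "Bootstrap Password:"
```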
|
||||
|
||||
But while this method is quick, I prefer the next approach for a more production-like setup.
|
||||
|
||||
### Option 2: Installing Rancher on K3s
|
||||
|
||||
My preferred method is to run Rancher on a lightweight Kubernetes distribution like K3s. This gives you better reliability and easier upgrades.
|
||||
|
||||
First, install K3s:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
Next, install cert-manager (required for Rancher to manage certificates):
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.12.2/cert-manager.yaml
|
||||
```
|
||||
|
||||
Then, install Rancher using Helm:
|
||||
|
||||
```bash
|
||||
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
|
||||
helm repo update
|
||||
|
||||
kubectl create namespace cattle-system
|
||||
|
||||
helm install rancher rancher-stable/rancher \
|
||||
--namespace cattle-system \
|
||||
--set hostname=rancher.yourdomain.com \
|
||||
--set bootstrapPassword=admin
|
||||
```
|
||||
|
||||
Depending on your home lab setup, you might want to use a load balancer or ingress controller. I use Traefik, which comes pre-installed with K3s:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: rancher
|
||||
namespace: cattle-system
|
||||
spec:
|
||||
rules:
|
||||
- host: rancher.yourdomain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: rancher
|
||||
port:
|
||||
number: 80
|
||||
tls:
|
||||
- hosts:
|
||||
- rancher.yourdomain.com
|
||||
secretName: rancher-tls
|
||||
```
|
||||
|
||||
## Importing Your Existing Clusters
|
||||
|
||||
Once Rancher is running, you can import your existing Kubernetes clusters. This is my favorite part because it doesn't require you to rebuild anything.
|
||||
|
||||
1. In the Rancher UI, go to "Cluster Management"
|
||||
2. Click "Import Existing"
|
||||
3. Choose a name for your cluster
|
||||
4. Copy the provided kubectl command and run it on your existing cluster
|
||||
|
||||
Rancher will install its agent on your cluster and begin managing it. Magic!
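
For reference, the command Rancher generates is just a `kubectl apply` of a registration manifest hosted by your Rancher server - it usually looks something like this (the token and cluster ID here are placeholders):

```bash
kubectl apply -f https://rancher.yourdomain.com/v3/import/abc123token_c-xyz98.yaml

# If Rancher is running with a self-signed certificate, the UI offers a curl variant instead:
curl --insecure -sfL https://rancher.yourdomain.com/v3/import/abc123token_c-xyz98.yaml | kubectl apply -f -
```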
|
||||
|
||||
## Setting Up Monitoring
|
||||
|
||||
Rancher makes it dead simple to deploy Prometheus and Grafana for monitoring:
|
||||
|
||||
1. From your cluster's dashboard, go to "Apps"
|
||||
2. Select "Monitoring" from the Charts
|
||||
3. Install with default settings (or customize as needed)
|
||||
|
||||
In minutes, you'll have a full monitoring stack with pre-configured dashboards for nodes, pods, workloads, and more.
|
||||
|
||||
Here's what my Grafana dashboard looks like for my home K8s cluster:
|
||||
|
||||

|
||||
|
||||
## Creating Deployments Through the UI
|
||||
|
||||
While I'm a big fan of GitOps and declarative deployments, sometimes you just want to quickly spin up a container without writing YAML. Rancher makes this painless:
|
||||
|
||||
1. Go to your cluster
|
||||
2. Select "Workload > Deployments"
|
||||
3. Click "Create"
|
||||
4. Fill in the form with your container details
|
||||
|
||||
You get a nice UI for setting environment variables, volumes, health checks, and more. Once you're happy with it, Rancher generates and applies the YAML behind the scenes.
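
Under the hood, the result is an ordinary Kubernetes Deployment. Roughly what gets applied for a simple form-filled deployment looks like this (the names, namespace, and image are examples - Rancher also adds its own labels and annotations):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: monitoring-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:latest
          ports:
            - containerPort: 3001
          env:
            - name: TZ
              value: America/Los_Angeles
```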
|
||||
|
||||
## Rancher Fleet for GitOps
|
||||
|
||||
One of the newer features I love is Fleet, Rancher's GitOps engine. It allows you to manage deployments across clusters using Git repositories:
|
||||
|
||||
```yaml
|
||||
# Example fleet.yaml
|
||||
defaultNamespace: monitoring
|
||||
helm:
|
||||
releaseName: prometheus
|
||||
repo: https://prometheus-community.github.io/helm-charts
|
||||
chart: kube-prometheus-stack
|
||||
version: 39.4.0
|
||||
values:
|
||||
grafana:
|
||||
adminPassword: ${GRAFANA_PASSWORD}
|
||||
targets:
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
environment: production
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
environment: development
|
||||
helm:
|
||||
values:
|
||||
resources:
|
||||
limits:
|
||||
memory: 1Gi
|
||||
requests:
|
||||
memory: 512Mi
|
||||
```
|
||||
|
||||
With Fleet, I maintain a Git repository with all my deployments, and Rancher automatically applies them to the appropriate clusters. When I push changes, they're automatically deployed - proper GitOps!
|
||||
|
||||
## Rancher for Projects and Teams
|
||||
|
||||
If you're working with a team or want to compartmentalize your applications, Rancher's projects feature is fantastic:
|
||||
|
||||
1. Create different projects within a cluster (e.g., "Media," "Home Automation," "Development")
|
||||
2. Assign namespaces to projects
|
||||
3. Set resource quotas for each project
|
||||
4. Create users and assign them to projects with specific permissions
|
||||
|
||||
This way, you can give friends or family members access to specific applications without worrying about them breaking your critical services.
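
Behind the scenes, project resource quotas land in the member namespaces as standard Kubernetes ResourceQuota objects. A rough sketch of what a capped "Media" project ends up with (the namespace and values are examples):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: media-project-quota
  namespace: media
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "10"
```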
|
||||
|
||||
## Advanced: Custom Cluster Templates
|
||||
|
||||
As my home lab grew, I started using Rancher's cluster templates to ensure consistency across my Kubernetes installations:
|
||||
|
||||
```yaml
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: ClusterTemplate
|
||||
metadata:
|
||||
name: homelab-standard
|
||||
spec:
|
||||
displayName: HomeStack Standard
|
||||
revisionName: homelab-standard-v1
|
||||
members:
|
||||
- accessType: owner
|
||||
userPrincipalName: user-abc123
|
||||
template:
|
||||
spec:
|
||||
rancherKubernetesEngineConfig:
|
||||
services:
|
||||
etcd:
|
||||
backupConfig:
|
||||
enabled: true
|
||||
intervalHours: 12
|
||||
retention: 6
|
||||
kubeApi:
|
||||
auditLog:
|
||||
enabled: true
|
||||
network:
|
||||
plugin: canal
|
||||
monitoring:
|
||||
provider: metrics-server
|
||||
addons: |-
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: cert-manager
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: ingress-nginx
|
||||
```
|
||||
|
||||
## My Top Rancher Tips
|
||||
|
||||
After years of using Rancher, here are my top tips:
|
||||
|
||||
1. **Use the Rancher CLI**: For repetitive tasks, the CLI is faster than the UI:
|
||||
```bash
|
||||
rancher login https://rancher.yourdomain.com --token token-abc123
|
||||
rancher kubectl get nodes
|
||||
```
|
||||
|
||||
2. **Set Up External Authentication**: Connect Rancher to your identity provider (I use GitHub):
|
||||
```yaml
|
||||
# Sample GitHub auth config
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: AuthConfig
|
||||
metadata:
|
||||
name: github
|
||||
type: githubConfig
|
||||
properties:
|
||||
enabled: true
|
||||
clientId: your-github-client-id
|
||||
clientSecret: your-github-client-secret
|
||||
allowedPrincipals:
|
||||
- github_user://your-github-username
|
||||
- github_org://your-github-org
|
||||
```
|
||||
|
||||
3. **Create Node Templates**: If you're using RKE, save node templates for quick cluster expansion.
|
||||
|
||||
4. **Use App Templates**: Save your common applications as templates for quick deployment.
|
||||
|
||||
5. **Set Up Alerts**: Configure alerts for node health, pod failures, and resource constraints.
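
If you installed the monitoring chart from earlier, alerts are defined as PrometheusRule resources. A minimal sketch for a node memory alert (the namespace, threshold, and labels are assumptions - adjust for your stack):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: homelab-node-alerts
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: node.rules
      rules:
        - alert: NodeMemoryHigh
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} memory usage is above 90%"
```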
|
||||
|
||||
## Dealing with Common Rancher Issues
|
||||
|
||||
Even the best tools have their quirks. Here are some issues I've encountered and how I solved them:
|
||||
|
||||
### Issue: Rancher UI Becomes Slow
|
||||
|
||||
If your Rancher UI starts lagging, check your browser's local storage. The Rancher UI caches a lot of data, which can build up over time:
|
||||
|
||||
```javascript
|
||||
// Run this in your browser console while on the Rancher page
|
||||
localStorage.clear()
|
||||
```
|
||||
|
||||
### Issue: Certificate Errors After DNS Changes
|
||||
|
||||
If you change your domain or DNS settings, Rancher certificates might need to be regenerated:
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-system delete secret tls-rancher-ingress
|
||||
kubectl -n cattle-system delete secret tls-ca
|
||||
```
|
||||
|
||||
Then restart the Rancher pods:
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-system rollout restart deploy/rancher
|
||||
```
|
||||
|
||||
### Issue: Stuck Cluster Imports
|
||||
|
||||
If a cluster gets stuck during import, clean up the agent resources and try again:
|
||||
|
||||
```bash
|
||||
kubectl delete clusterrole cattle-admin cluster-owner
|
||||
kubectl delete clusterrolebinding cattle-admin-binding cluster-owner
|
||||
kubectl delete namespace cattle-system
|
||||
```
|
||||
|
||||
## The Future of Rancher
|
||||
|
||||
With SUSE's acquisition of Rancher Labs, the future looks bright. The latest Rancher updates have added:
|
||||
|
||||
- Better integration with cloud providers
|
||||
- Improved security features
|
||||
- Enhanced multi-cluster management
|
||||
- Lower resource requirements (great for home labs)
|
||||
|
||||
My wish list for future versions includes:
|
||||
|
||||
- Native GitOps for everything (not just workloads)
|
||||
- Better templating for one-click deployments
|
||||
- More pre-configured monitoring dashboards
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
Rancher has transformed how I manage my home Kubernetes clusters. What used to be a complex, time-consuming task is now almost... fun? If you're running Kubernetes at home and haven't tried Rancher yet, you're missing out on one of the best tools in the Kubernetes ecosystem.
|
||||
|
||||
Sure, you could manage everything with kubectl and YAML files (and I still do that sometimes), but having a well-designed UI for management, monitoring, and troubleshooting saves countless hours and reduces the learning curve for those just getting started with Kubernetes.
|
||||
|
||||
Are you using Rancher or another tool to manage your Kubernetes clusters? What's been your experience? Let me know in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on March 5, 2024 with information about Rancher v2.7 features and Fleet GitOps capabilities._
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
title: Starting My Digital Garden
|
||||
pubDate: 2025-04-19
|
||||
tags:
|
||||
- blog
|
||||
- meta
|
||||
- digital-garden
|
||||
draft: false
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
# Starting My Digital Garden
|
||||
|
||||
## Introduction
|
||||
|
||||
Today I'm launching my public digital garden - a space where I'll share my thoughts, ideas, and projects with the world.
|
||||
|
||||
## What to Expect
|
||||
|
||||
This site contains two main sections:
|
||||
|
||||
1. **Journal** - Daily notes and thoughts, more raw and in-the-moment
|
||||
2. **Blog** - More structured and polished articles on various topics
|
||||
|
||||
I'll be using my daily journal to capture ideas as they happen, and some of these will evolve into more detailed blog posts over time.
|
||||
|
||||
## Why a Digital Garden?
|
||||
|
||||
Unlike traditional blog posts, which are often published and then forgotten, a digital garden is meant to grow and evolve over time. I'll revisit and update content as my thoughts and understanding develop.
|
||||
|
||||
## Topics I'll Cover
|
||||
|
||||
- Technology projects I'm working on
|
||||
- Learning notes and discoveries
|
||||
- Workflow and productivity systems
|
||||
- Occasional personal reflections
|
||||
|
||||
## Stay Connected
|
||||
|
||||
Feel free to check back regularly to see what's growing in this garden. The journal section will be updated most frequently, while blog posts will appear when ideas have had time to mature.
|
|
@ -0,0 +1,14 @@
|
|||
---
|
||||
title: Test Post
|
||||
pubDate: 2024-03-20
|
||||
description: This is a test post to verify the blog setup
|
||||
category: Test
|
||||
tags:
|
||||
- test
|
||||
draft: true
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
# Test Post
|
||||
|
||||
This is a test post to verify that the blog setup is working correctly.
|
|
@ -0,0 +1,517 @@
|
|||
---
|
||||
title: Setting Up VS Code Server for Remote Development Anywhere
|
||||
description: How to set up and configure VS Code Server for seamless remote development from any device
|
||||
pubDate: 2023-04-18
|
||||
updatedDate: 2024-04-19
|
||||
category: Services
|
||||
tags:
|
||||
- vscode
|
||||
- remote-development
|
||||
- self-hosted
|
||||
- coding
|
||||
- homelab
|
||||
heroImage: /blog/images/posts/prometheusk8.png
|
||||
---
|
||||
|
||||
If you're like me, you probably find yourself coding on multiple devices - maybe a desktop at home, a laptop when traveling, or even occasionally on a tablet. For years, keeping development environments in sync was a pain point. Enter [VS Code Server](https://code.visualstudio.com/docs/remote/vscode-server), the solution that has completely transformed my development workflow.
|
||||
|
||||
Today, I want to show you how to set up your own self-hosted VS Code Server that lets you code from literally any device with a web browser, all while using your powerful home server for the heavy lifting.
|
||||
|
||||
## Why VS Code Server?
|
||||
|
||||
Before we dive into the setup, let's talk about why you might want this:
|
||||
|
||||
- **Consistent environment**: The same development setup, extensions, and configurations regardless of which device you're using.
|
||||
- **Resource optimization**: Run resource-intensive tasks (builds, tests) on your powerful server instead of your laptop.
|
||||
- **Work from anywhere**: Access your development environment from any device with a browser, even an iPad or a borrowed computer.
|
||||
- **Seamless switching**: Start working on one device and continue on another without missing a beat.
|
||||
|
||||
I've been using this setup for months now, and it's been a game-changer for my productivity. Let's get into the setup!
|
||||
|
||||
## Setting Up VS Code Server
|
||||
|
||||
There are a few ways to run VS Code Server. I'll cover the official method and my preferred Docker approach.
|
||||
|
||||
### Option 1: Official CLI Installation
|
||||
|
||||
The VS Code team provides a CLI for setting up the server:
|
||||
|
||||
```bash
|
||||
# Download and install the CLI
|
||||
curl -fsSL 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' -o vscode_cli.tar.gz
|
||||
tar -xf vscode_cli.tar.gz
|
||||
sudo mv code /usr/local/bin/
|
||||
|
||||
# Start the server
|
||||
code serve-web --accept-server-license-terms --host 0.0.0.0
|
||||
```
|
||||
|
||||
This method is straightforward but requires you to manage the process yourself.
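
If you go this route, it's worth wrapping the command in a systemd unit so it starts on boot and restarts on failure. A minimal sketch (the user and binary path are assumptions):

```ini
# /etc/systemd/system/code-serve-web.service
[Unit]
Description=VS Code Server (serve-web)
After=network-online.target

[Service]
User=youruser
ExecStart=/usr/local/bin/code serve-web --accept-server-license-terms --host 0.0.0.0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now code-serve-web`.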
|
||||
|
||||
### Option 2: Docker Installation (My Preference)
|
||||
|
||||
I prefer using Docker for easier updates and management. Here's my `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
services:
|
||||
code-server:
|
||||
image: linuxserver/code-server:latest
|
||||
container_name: code-server
|
||||
environment:
|
||||
- PUID=1000
|
||||
- PGID=1000
|
||||
- TZ=America/Los_Angeles
|
||||
- PASSWORD=your_secure_password # Consider using Docker secrets instead
|
||||
- SUDO_PASSWORD=your_sudo_password # Optional
|
||||
- PROXY_DOMAIN=code.yourdomain.com # Optional
|
||||
volumes:
|
||||
- ./config:/config
|
||||
- /path/to/your/projects:/projects
|
||||
- /path/to/your/home:/home/coder
|
||||
ports:
|
||||
- 8443:8443
|
||||
restart: unless-stopped
|
||||
```
|
||||
|
||||
Run it with:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Your VS Code Server will be available at `https://your-server-ip:8443`.
|
||||
|
||||
### Option 3: Kubernetes Deployment
|
||||
|
||||
For those running Kubernetes (like me), here's a YAML manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vscode-server
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vscode-server
|
||||
spec:
|
||||
containers:
|
||||
- name: vscode-server
|
||||
image: linuxserver/code-server:latest
|
||||
env:
|
||||
- name: PUID
|
||||
value: "1000"
|
||||
- name: PGID
|
||||
value: "1000"
|
||||
- name: TZ
|
||||
value: "America/Los_Angeles"
|
||||
- name: PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: vscode-secrets
|
||||
key: password
|
||||
ports:
|
||||
- containerPort: 8443
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /config
|
||||
- name: projects
|
||||
mountPath: /projects
|
||||
volumes:
|
||||
- name: config
|
||||
persistentVolumeClaim:
|
||||
claimName: vscode-config
|
||||
- name: projects
|
||||
persistentVolumeClaim:
|
||||
claimName: vscode-projects
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
spec:
|
||||
selector:
|
||||
app: vscode-server
|
||||
ports:
|
||||
- port: 8443
|
||||
targetPort: 8443
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: vscode-server
|
||||
namespace: development
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: nginx
|
||||
cert-manager.io/cluster-issuer: letsencrypt-prod
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- code.yourdomain.com
|
||||
secretName: vscode-tls
|
||||
rules:
|
||||
- host: code.yourdomain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: vscode-server
|
||||
port:
|
||||
number: 8443
|
||||
```
|
||||
|
||||
## Accessing VS Code Server Securely
|
||||
|
||||
You don't want to expose your development environment directly to the internet without proper security. Here are my recommendations:
|
||||
|
||||
### 1. Use a Reverse Proxy with SSL
|
||||
|
||||
I use Traefik as a reverse proxy with automatic SSL certificate generation:
|
||||
|
||||
```yaml
|
||||
# traefik.yml dynamic config
|
||||
http:
|
||||
routers:
|
||||
vscode:
|
||||
rule: "Host(`code.yourdomain.com`)"
|
||||
service: "vscode"
|
||||
entryPoints:
|
||||
- websecure
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
services:
|
||||
vscode:
|
||||
loadBalancer:
|
||||
servers:
|
||||
- url: "http://localhost:8443"
|
||||
```
|
||||
|
||||
### 2. Set Up Authentication
|
||||
|
||||
The LinuxServer image already includes basic authentication, but you can add another layer with something like Authelia:
|
||||
|
||||
```yaml
|
||||
# authelia configuration.yml
|
||||
access_control:
|
||||
default_policy: deny
|
||||
rules:
|
||||
- domain: code.yourdomain.com
|
||||
policy: two_factor
|
||||
subject: "group:developers"
|
||||
```
|
||||
|
||||
### 3. Use Cloudflare Tunnel
|
||||
|
||||
For ultimate security, I use a Cloudflare Tunnel to avoid exposing any ports:
|
||||
|
||||
```yaml
|
||||
# cloudflared config.yml
|
||||
tunnel: your-tunnel-id
|
||||
credentials-file: /etc/cloudflared/creds.json
|
||||
|
||||
ingress:
|
||||
- hostname: code.yourdomain.com
|
||||
service: http://localhost:8443
|
||||
originRequest:
|
||||
noTLSVerify: true
|
||||
- service: http_status:404
|
||||
```
|
||||
|
||||
## Configuring Your VS Code Server Environment
|
||||
|
||||
Once your server is running, it's time to set it up for optimal productivity:
|
||||
|
||||
### 1. Install Essential Extensions
|
||||
|
||||
Here are the must-have extensions I install first:
|
||||
|
||||
```bash
|
||||
# From the VS Code terminal
|
||||
code-server --install-extension ms-python.python
|
||||
code-server --install-extension ms-azuretools.vscode-docker
|
||||
code-server --install-extension dbaeumer.vscode-eslint
|
||||
code-server --install-extension esbenp.prettier-vscode
|
||||
code-server --install-extension github.copilot
|
||||
code-server --install-extension golang.go
|
||||
```
|
||||
|
||||
Or you can install them through the Extensions marketplace in the UI.
|
||||
|
||||
### 2. Configure Settings Sync
|
||||
|
||||
To keep your settings in sync between instances:
|
||||
|
||||
1. Open the Command Palette (Ctrl+Shift+P)
|
||||
2. Search for "Settings Sync: Turn On"
|
||||
3. Sign in with your GitHub or Microsoft account
|
||||
|
||||
### 3. Set Up Git Authentication
|
||||
|
||||
For seamless Git operations:
|
||||
|
||||
```bash
|
||||
# Generate a new SSH key if needed
|
||||
ssh-keygen -t ed25519 -C "your_email@example.com"
|
||||
|
||||
# Add to your GitHub/GitLab account
|
||||
cat ~/.ssh/id_ed25519.pub
|
||||
|
||||
# Configure Git
|
||||
git config --global user.name "Your Name"
|
||||
git config --global user.email "your_email@example.com"
|
||||
```
|
||||
|
||||
## Power User Features
|
||||
|
||||
Now let's look at some advanced configurations that make VS Code Server even more powerful:
|
||||
|
||||
### 1. Workspace Launcher
|
||||
|
||||
I created a simple HTML page that lists all my projects for quick access:
|
||||
|
||||
```html
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>LaForceIT Workspace Launcher</title>
|
||||
<style>
|
||||
body {
|
||||
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
||||
background-color: #1e1e1e;
|
||||
color: #d4d4d4;
|
||||
max-width: 800px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
.project-list {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
|
||||
gap: 15px;
|
||||
margin-top: 20px;
|
||||
}
|
||||
.project-card {
|
||||
background-color: #252526;
|
||||
border-radius: 5px;
|
||||
padding: 15px;
|
||||
transition: transform 0.2s;
|
||||
}
|
||||
.project-card:hover {
|
||||
transform: translateY(-5px);
|
||||
background-color: #2d2d2d;
|
||||
}
|
||||
a {
|
||||
color: #569cd6;
|
||||
text-decoration: none;
|
||||
}
|
||||
h1 { color: #569cd6; }
|
||||
h2 { color: #4ec9b0; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<h1>LaForceIT Workspace Launcher</h1>
|
||||
<div class="project-list">
|
||||
<div class="project-card">
|
||||
<h2>ArgoBox</h2>
|
||||
<p>My Kubernetes home lab platform</p>
|
||||
<a href="/projects/argobox">Open Workspace</a>
|
||||
</div>
|
||||
<div class="project-card">
|
||||
<h2>Blog</h2>
|
||||
<p>LaForceIT blog and digital garden</p>
|
||||
<a href="/projects/blog">Open Workspace</a>
|
||||
</div>
|
||||
<!-- Add more projects as needed -->
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
### 2. Custom Terminal Profile
|
||||
|
||||
Add this to your `settings.json` for a better terminal experience:
|
||||
|
||||
```json
|
||||
"terminal.integrated.profiles.linux": {
|
||||
"bash": {
|
||||
"path": "bash",
|
||||
"icon": "terminal-bash",
|
||||
"args": ["-l"]
|
||||
},
|
||||
"zsh": {
|
||||
"path": "zsh"
|
||||
},
|
||||
"customProfile": {
|
||||
"path": "bash",
|
||||
"args": ["-c", "neofetch && bash -l"],
|
||||
"icon": "terminal-bash",
|
||||
"overrideName": true
|
||||
}
|
||||
},
|
||||
"terminal.integrated.defaultProfile.linux": "customProfile"
|
||||
```
|
||||
|
||||
### 3. Persistent Development Containers
|
||||
|
||||
I use Docker-in-Docker to enable VS Code Dev Containers:
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
services:
|
||||
code-server:
|
||||
# ... other config from above
|
||||
volumes:
|
||||
# ... other volumes
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
environment:
|
||||
# ... other env vars
|
||||
- DOCKER_HOST=unix:///var/run/docker.sock
|
||||
```
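
If your setup supports the Dev Containers workflow, a project-level `.devcontainer/devcontainer.json` then describes the environment that gets built on the server. A minimal sketch (the image and extensions are just examples):

```json
{
  "name": "argobox-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "esbenp.prettier-vscode"]
    }
  },
  "forwardPorts": [3000]
}
```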
|
||||
|
||||
## Real-World Examples: How I Use VS Code Server
|
||||
|
||||
Let me share a few real-world examples of how I use this setup:
|
||||
|
||||
### Example 1: Coding on iPad During Travel
|
||||
|
||||
When traveling with just my iPad, I connect to my VS Code Server to work on my projects. With a Bluetooth keyboard and the amazing iPad screen, it's a surprisingly good experience. The heavy compilation happens on my server back home, so battery life on the iPad remains excellent.
|
||||
|
||||
### Example 2: Pair Programming Sessions
|
||||
|
||||
When helping friends debug issues, I can generate a temporary access link to my VS Code Server:
|
||||
|
||||
```bash
|
||||
# Create a time-limited token
|
||||
TEMP_TOKEN=$(openssl rand -hex 16)
|
||||
echo "Token: $TEMP_TOKEN" > /tmp/temp_token
|
||||
curl -X POST -H "Content-Type: application/json" \
|
||||
-d "{\"token\":\"$TEMP_TOKEN\", \"expiresIn\":\"2h\"}" \
|
||||
http://localhost:8443/api/auth/temporary
|
||||
```
|
||||
|
||||
### Example 3: Switching Between Devices
|
||||
|
||||
I often start coding on my desktop, then switch to my laptop when moving to another room. With VS Code Server, I just close the browser on one device and open it on another - my entire session, including unsaved changes and terminal state, remains intact.
|
||||
|
||||
## Monitoring and Maintaining Your VS Code Server
|
||||
|
||||
To keep your server running smoothly:
|
||||
|
||||
### 1. Set Up Health Checks
|
||||
|
||||
I use Uptime Kuma to monitor my VS Code Server:
|
||||
|
||||
```yaml
|
||||
# Docker Compose snippet for Uptime Kuma
|
||||
uptime-kuma:
|
||||
image: louislam/uptime-kuma:latest
|
||||
container_name: uptime-kuma
|
||||
volumes:
|
||||
- ./uptime-kuma:/app/data
|
||||
ports:
|
||||
- 3001:3001
|
||||
restart: unless-stopped
|
||||
```
|
||||
|
||||
Add a monitor that checks the `https://code.yourdomain.com/healthz` endpoint.
|
||||
|
||||
### 2. Regular Backups
|
||||
|
||||
Set up a cron job to back up your VS Code Server configuration:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# /etc/cron.daily/backup-vscode-server
|
||||
tar -czf /backups/vscode-server-$(date +%Y%m%d).tar.gz /path/to/config
|
||||
```
|
||||
|
||||
### 3. Update Script
|
||||
|
||||
Create a script for easy updates:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# update-vscode-server.sh
|
||||
cd /path/to/docker-compose
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
docker system prune -f
|
||||
```
|
||||
|
||||
## Troubleshooting Common Issues
|
||||
|
||||
Here are solutions to some issues I've encountered:
|
||||
|
||||
### Issue: Extensions Not Installing
|
||||
|
||||
If extensions fail to install, try:
|
||||
|
||||
```bash
|
||||
# Clear the extensions cache
|
||||
rm -rf ~/.vscode-server/extensions/*
|
||||
|
||||
# Restart the server
|
||||
docker-compose restart code-server
|
||||
```
|
||||
|
||||
### Issue: Performance Problems
|
||||
|
||||
If you're experiencing lag:
|
||||
|
||||
```yaml
|
||||
# Adjust memory limits in docker-compose.yml
|
||||
services:
|
||||
code-server:
|
||||
# ... other config
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
memory: 4G
|
||||
reservations:
|
||||
memory: 1G
|
||||
```
|
||||
|
||||
### Issue: Git Integration Not Working
|
||||
|
||||
For Git authentication issues:
|
||||
|
||||
```bash
|
||||
# Make sure your SSH key is properly set up
|
||||
eval "$(ssh-agent -s)"
|
||||
ssh-add ~/.ssh/id_ed25519
|
||||
|
||||
# Test your connection
|
||||
ssh -T git@github.com
|
||||
```
|
||||
|
||||
## Why I Prefer Self-Hosted Over GitHub Codespaces
|
||||
|
||||
People often ask why I don't just use GitHub Codespaces. Here's my take:
|
||||
|
||||
1. **Cost**: Self-hosted is essentially free (if you already have a server).
|
||||
2. **Privacy**: All my code remains on my hardware.
|
||||
3. **Customization**: Complete control over the environment.
|
||||
4. **Performance**: My server has 64GB RAM and a 12-core CPU - better than most cloud options.
|
||||
5. **Availability**: Works even when GitHub is down or when I have limited internet.
|
||||
|
||||
## Wrapping Up
|
||||
|
||||
VS Code Server has truly changed how I approach development. The ability to have a consistent, powerful environment available from any device has increased my productivity and eliminated the friction of context-switching between machines.
|
||||
|
||||
Whether you're a solo developer or part of a team, having a centralized development environment that's accessible from anywhere is incredibly powerful. And the best part? It uses the familiar VS Code interface that millions of developers already know and love.
|
||||
|
||||
Have you tried VS Code Server or similar remote development solutions? I'd love to hear about your setup and experiences in the comments!
|
||||
|
||||
---
|
||||
|
||||
_This post was last updated on March 10, 2024 with information about the latest VS Code Server features and Docker image updates._
|
|
@ -0,0 +1,12 @@
|
|||
---
|
||||
title: "Projects Collection"
|
||||
description: "A placeholder document for the projects collection"
|
||||
heroImage: "/blog/images/placeholders/default.jpg"
|
||||
pubDate: 2025-04-18
|
||||
status: "planning"
|
||||
tech: ["astro", "markdown"]
|
||||
---
|
||||
|
||||
# Projects Collection
|
||||
|
||||
This is a placeholder file for the projects collection.
|