Another thing you can do is to have recordings by default go to the local drive, then have an overnight (4am?) script run that moves the oldest recordings to the NAS.
I've been using this script for a few years now with this kind of setup. I borrowed it years ago from Louwrentius and have been using it with only minor modifications since.
You just set a usage threshold on the local disk (with enough safety margin that a day's worth of recordings will never exceed the remaining space), and it will dutifully move the oldest recordings to the mount location you define (your NAS) until usage falls back below the threshold.
Mine is called moveoldfiles.sh
The syntax is "moveoldfiles.sh <mountpoint where files are stored> <maximum percent disk usage>", so in my case "moveoldfiles.sh /mnt/scheduled 86" keeps usage of /mnt/scheduled below 86%.
Then you just edit the process_file function near the end of the script to tell it what to do with the identified files, and give it a location to move them to.
Code:
#!/bin/bash
#
###############################################################################
# Author : Louwrentius
# Contact : louwrentius@gmail.com
# Initial release : August 2011
# Licence : Simplified BSD License
###############################################################################
VERSION=1.01
#
# Mounted volume to be monitored.
#
MOUNT="$1"
#
# Maximum threshold of volume used as an integer that represents a percentage:
# 95 = 95%.
#
MAX_USAGE="$2"
#
# Failsafe mechanism. Process a maximum of MAX_CYCLES files, raise an error
# after that. Prevents a possible runaway script. Disable by choosing a high value.
#
MAX_CYCLES=1000
show_header () {
    echo
    echo "MOVE OLD FILES $VERSION"
    echo
}
show_header
reset () {
    OLDEST_FILE=""
    OLDEST_DATE=0
}
#
# CYCLES is initialised here rather than in reset: the main loop calls
# reset on every pass, and zeroing the counter there would defeat the
# MAX_CYCLES failsafe.
#
CYCLES=0
ARCH=`uname`
reset
if [ -z "$MOUNT" ] || [ ! -d "$MOUNT" ] || [ -z "$MAX_USAGE" ]
then
    echo "Usage: $0 <mountpoint> <threshold>"
    echo "Where threshold is a percentage."
    echo
    echo "Example: $0 /storage 90"
    echo "If disk usage of /storage exceeds 90% the oldest"
    echo "file(s) will be moved until usage is below 90%."
    echo
    echo "Wrong command line arguments or another error:"
    echo
    echo "- Directory not provided as argument or"
    echo "- Directory does not exist or"
    echo "- Argument is not a directory or"
    echo "- no/wrong percentage supplied as argument."
    echo
    exit 1
fi
check_capacity () {
    #
    # df -P prints one unwrapped line for the filesystem holding $MOUNT,
    # which is more robust than grepping the full df output (grep could
    # match a substring of another mountpoint).
    #
    USAGE=`df -P "$MOUNT" | awk 'NR==2 { print $5 }' | sed s/%//g`
    if [ -z "$USAGE" ]
    then
        echo "Error: could not get usage information for $MOUNT."
        echo "Mountpoint does not exist, or remove the trailing slash."
        exit 1
    fi
    if [ "$USAGE" -gt "$MAX_USAGE" ]
    then
        echo "Usage of $USAGE% exceeded limit of $MAX_USAGE percent."
        return 0
    else
        echo "Usage of $USAGE% is within limit of $MAX_USAGE percent."
        return 1
    fi
}
check_age () {
    FILE="$1"
    if [ "$ARCH" == "Linux" ]
    then
        FILE_DATE=`stat -c %Y "$FILE"`
    elif [ "$ARCH" == "Darwin" ]
    then
        FILE_DATE=`stat -f %Sm -t %s "$FILE"`
    else
        echo "Error: unsupported architecture."
        echo "Send a patch with the correct stat arguments for your architecture."
        exit 1
    fi
    NOW=`date +%s`
    AGE=$((NOW-FILE_DATE))
    if [ "$AGE" -gt "$OLDEST_DATE" ]
    then
        OLDEST_DATE="$AGE"
        OLDEST_FILE="$FILE"
    fi
}
process_file () {
    FILE="$1"
    #
    # Replace the following commands with whatever you want to do with
    # this file. You can delete files, but also move them or do something else.
    #
    echo "Moving oldest file $FILE"
    mv -f "$FILE" /mnt/archive/scheduled/
}
while check_capacity
do
    if [ "$CYCLES" -gt "$MAX_CYCLES" ]
    then
        echo "Error: after $MAX_CYCLES moved files still not enough free space."
        exit 1
    fi
    reset
    FILES=`find "$MOUNT" -type f`
    IFS=$'\n'
    for x in $FILES
    do
        check_age "$x"
    done
    if [ -e "$OLDEST_FILE" ]
    then
        process_file "$OLDEST_FILE"
    else
        echo "Error: no files found, or the oldest file disappeared."
        exit 1
    fi
    ((CYCLES++))
done
echo
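One pitfall worth guarding against (my addition, not part of the original script): if the NAS share drops, mv in process_file lands the files on the local disk underneath the empty mountpoint, filling the very disk the script is trying to free. A minimal sketch of a guard, assuming a Linux box with mountpoint(1) from util-linux and the /mnt/archive path used above:

```shell
#!/bin/bash
# Sketch of an extra safety check (my addition, not in Louwrentius's
# script): refuse to move anything unless the NAS is actually mounted.

nas_mounted () {
    # Succeeds only if the given directory is itself a mountpoint.
    mountpoint -q "$1"
}

# Example: something like this near the top of moveoldfiles.sh, where
# /mnt/archive is the parent of the destination used in process_file.
if nas_mounted /mnt/archive
then
    echo "NAS is mounted, safe to move files."
else
    echo "Error: nothing mounted at /mnt/archive, not moving anything."
fi
```

You could have the guard exit non-zero instead of just printing, so a cron wrapper can alert you when the mount is down.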
The great part about how MythTV is designed is that it doesn't care where the files are located, as long as they are in a folder configured in the backend. So, move the files from one configured folder to another, and the backend just finds them.
Probably wouldn't hurt to regularly run the optimize_mythdb.pl script, though, just to make sure everything is cleaned up. I have that one run from cron every night at 4am, right after the move operation is done.
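The overnight scheduling described above might look something like this in a crontab. The paths are illustrative, not taken from my actual setup: put moveoldfiles.sh wherever you keep local scripts, and find optimize_mythdb.pl where your distribution installs the MythTV contrib scripts.

```shell
# m h dom mon dow  command
#
# 4am: move the oldest recordings until local usage is back under 86%,
# then, only if the move succeeded, tidy up the MythTV database.
0 4 * * * /usr/local/bin/moveoldfiles.sh /mnt/scheduled 86 >> /var/log/moveoldfiles.log 2>&1 && /usr/share/mythtv/optimize_mythdb.pl
```

Chaining the two with && matches the "right after the move is done" ordering better than giving each its own cron time, since a slow night's move won't overlap the database optimization.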
v33 backend in 22.04 LTS w. LXDE, in LXC on server w. 16C/32T Xeon E5-2650v2, 256GB RAM. 6C & 8GB assigned to container.