Most HA installations run on Raspberry Pis with either an SD card or an SSD as storage. This matters because disk space may be limited, the life expectancy of these devices is reduced by write cycles, and large log files and databases degrade performance because system resources are not unlimited.
One would therefore expect the focus to be on avoiding generating data and committing it to disk, but unfortunately the opposite is true. On installation, the default for the HA core and integrations is to log at the "info" level; a better approach might be to log errors only at first. Users could then expand the logging to include additional (debug) info for a specific component when and where they encounter problems. As it stands, users must search through forums and documentation to find solutions, where available.
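The errors-only approach described above can already be configured today with the `logger` integration. A minimal sketch for `configuration.yaml` (the component under `logs:` is just an example; substitute whichever integration you are actually debugging):

```yaml
# configuration.yaml
# Log errors only by default; raise verbosity per component while debugging.
logger:
  default: error
  logs:
    homeassistant.components.zha: debug  # example component, adjust to your setup
```

This cuts routine info-level writes to disk while still letting you turn on detailed logging for one component at a time.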
The HA database is also denormalized, wasting a lot of space by repeating the same data over and over. For example, every entry in the "states" table for a sensor reading contains a large chunk of text with the sensor configuration (e.g. unit of measure, icon, friendly name, etc.).
Providing mechanisms to purge the data and truncate log files only addresses part of the problem, because all that data was already written to disk and consumed part of the drive's write endurance (DWPD/TBW) before it was cleaned up.
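Since purging only helps after the writes have happened, the `recorder` integration can be tuned to reduce writes at the source. A sketch for `configuration.yaml`, assuming the excluded domains and the `sensor.*_linkquality` glob are just illustrative placeholders for whatever is noisy in your own setup:

```yaml
# configuration.yaml
# Keep less history and write to disk less often.
recorder:
  purge_keep_days: 7    # retain one week of history instead of the default
  commit_interval: 30   # seconds between disk commits; batches writes together
  exclude:
    domains:
      - automation      # hypothetical: skip recording these domains entirely
    entity_globs:
      - sensor.*_linkquality  # hypothetical noisy entities
```

Excluding chatty entities avoids the repeated state rows (and the repeated attribute text mentioned above) ever being written, rather than cleaning them up afterwards.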
The question is what we as users can do to reduce the pain and help protect our systems.
Any thoughts and advice you can share based on your experience?