HDFS (OS)
Hadoop Distributed File System — designed for storing very large files across a cluster of commodity machines, and optimized for high-throughput sequential reads of large files. A single NameNode holds the filesystem metadata, while DataNodes store the file blocks, each replicated three times by default. Not suitable for low-latency access or for workloads with many small files.
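To make the NameNode/DataNode split concrete, here is a minimal read sketch using the Hadoop Java client API. The NameNode address, port, and file path are placeholder assumptions, not values from this note.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode; host and port are hypothetical.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

        // The client asks the NameNode where the file's blocks live,
        // then streams the block data directly from the DataNodes.
        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/data/example.txt"));
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

The sequential, stream-oriented read shown here is the access pattern HDFS is built for; random seeks into many small files go through the same NameNode round trip each time, which is why that workload performs poorly.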