Using a generator
```python
def read_large_file(file_path, chunk_size=1024 * 1024, encoding='utf-8'):
    """Yield the file in fixed-size chunks (1 MB by default)."""
    with open(file_path, 'r', encoding=encoding) as file:
        while True:
            chunk = file.read(chunk_size)
            if not chunk:  # an empty string means end of file
                break
            yield chunk

for chunk in read_large_file('large_file.txt'):
    ...  # process the data
```
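As a concrete illustration, the generator can feed an aggregate computation while holding at most one chunk in memory. A minimal usage sketch that counts characters:

```python
# Minimal usage sketch: count characters while keeping at most
# one 1 MB chunk in memory at any time.
total_chars = 0
for chunk in read_large_file('large_file.txt'):
    total_chars += len(chunk)
print(f'{total_chars} characters')
```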
Reading line by line
```python
with open('large_file.txt', 'r', encoding='utf-8') as file:
    for line in file:  # the file object is itself a lazy line iterator
        ...  # process the data
```
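For example, here is a minimal sketch that counts matching lines (the keyword `'ERROR'` is an assumed placeholder); memory use stays flat because only one line is held at a time:

```python
# Minimal sketch: count lines containing a keyword ('ERROR' is
# an assumed placeholder) without loading the whole file.
matches = 0
with open('large_file.txt', 'r', encoding='utf-8') as file:
    for line in file:
        if 'ERROR' in line:
            matches += 1
print(f'{matches} matching lines')
```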
Using the `fileinput` module
```python
import fileinput

for line in fileinput.input('large_file.txt'):
    ...  # process the data
```
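`fileinput` is mainly useful when the input may span several files; it can also force an encoding via `hook_encoded` and report where each line came from. A minimal sketch, with the file names as assumptions:

```python
import fileinput

# Minimal sketch: iterate over several files as one stream and
# report where each line came from. The file names are assumptions.
with fileinput.input(files=['part1.txt', 'part2.txt'],
                     openhook=fileinput.hook_encoded('utf-8')) as stream:
    for line in stream:
        print(fileinput.filename(), fileinput.filelineno(), line.rstrip())
```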
Using the `read` method
```python
with open('large_file.txt', 'r', encoding='utf-8') as file:
    while True:
        chunk = file.read(1024 * 1024)  # read 1 MB at a time
        if not chunk:  # an empty string means end of file
            break
        ...  # process the data
```
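One caveat with fixed-size chunks: a pattern you search for can straddle a chunk boundary and be missed. A minimal sketch that works around this by carrying a small overlap between chunks (the search term is an assumption):

```python
# Minimal sketch: count occurrences of a term across chunk
# boundaries by keeping a small overlap ('ERROR' is an assumed
# placeholder; any search term works the same way).
search_term = 'ERROR'
overlap = len(search_term) - 1
count = 0
tail = ''
with open('large_file.txt', 'r', encoding='utf-8') as file:
    while True:
        chunk = file.read(1024 * 1024)
        if not chunk:
            break
        buffer = tail + chunk
        count += buffer.count(search_term)
        # Keep just enough of the end of this buffer to catch a
        # match split across the boundary with the next chunk.
        tail = buffer[-overlap:] if overlap else ''
print(f'{count} occurrences')
```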
Using the `readline` method
```python
with open('large_file.txt', 'r', encoding='utf-8') as file:
    while True:
        line = file.readline()
        if not line:  # readline returns '' at end of file
            break
        ...  # process the data
```
Which method to choose depends on your specific needs, such as whether you need the entire file contents at once, how fast processing must be, and how much memory is available. In general, reading line by line or using a generator is the best practice for large files, because both hold only a small window of the file in memory at any time.