Hi everyone, I'm 晚枫 (Wanfeng), a programmer currently building all kinds of hands-on AI projects.

Today let's talk about a skill every Python developer has to master: file operations.
## A Real Crash Story

Last year a student asked me: "晚枫, why does my program run for a while and then crash?"

One look at his code told me everything:
```python
for i in range(100000):
    f = open(f'user_{i}.txt', 'r')
    data = f.read()
    # f.close() is never called
```
**The problem**: every iteration opens a file and never closes it. Open handles pile up until the system runs out of file descriptors and the program crashes.
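The fix is one line of structure: hand each file to a `with` block so it is closed before the next iteration. A minimal sketch, assuming the same hypothetical `user_*.txt` files:

```python
for i in range(100000):
    with open(f'user_{i}.txt', 'r') as f:
        data = f.read()
    # the file is guaranteed closed here, even if read() raised
```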
That's the price of not understanding file operations. You might think file I/O is trivial — just call open() and go. In reality it is full of pitfalls and tricks, and knowing them saves you a lot of pain.

I've collected 10 techniques for working with files, from beginner to advanced. The last one is the most elegant.
## Technique 1: The Naive Way (Dangerous!)

A simple example:

```python
f = open('data.txt', 'r')
content = f.read()
print(content)
f.close()
```
Where's the trap?

```python
f = open('data.txt', 'r')
content = f.read()   # if this line raises an exception...
f.close()            # ...this line never runs, and the handle leaks
```
A real-world case:

```python
f = open('sales_data.csv', 'r')
try:
    content = f.read()
except Exception as e:
    print(f"Error: {e}")
# f.close() is missing entirely -- the handle leaks on every path
```
**Bottom line**: never use this pattern in production!
## Technique 2: try-finally (Safe but Verbose)

Basic usage:

```python
f = open('data.txt', 'r')
try:
    content = f.read()
    print(content)
finally:
    f.close()
```
Advanced usage, handling several kinds of exceptions:

```python
import json

f = open('config.txt', 'r', encoding='utf-8')
try:
    content = f.read()
    data = json.loads(content)
except UnicodeDecodeError:
    print("Encoding error, retrying with another encoding...")
    f.close()
    f = open('config.txt', 'r', encoding='gbk')
    content = f.read()
    data = json.loads(content)  # remember to re-parse after re-reading
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e}")
finally:
    if not f.closed:
        f.close()
```
**Pro**: the file gets closed whether or not an error occurs. **Con**: far too much code, and easy to get wrong.
## Technique 3: The with Statement (Recommended!)

Basic usage:

```python
with open('data.txt', 'r') as f:
    content = f.read()
    print(content)
```
Why is it recommended? Because those three lines are equivalent to the whole try-finally dance:

```python
# what `with` does for you behind the scenes:
f = open('data.txt', 'r')
try:
    content = f.read()
    print(content)
finally:
    f.close()
```
Working with several files at once:

```python
# transform one file into another, line by line
with open('input.txt', 'r') as fin, open('output.txt', 'w') as fout:
    for line in fin:
        processed = line.upper()
        fout.write(processed)

# binary copy
with open('source.jpg', 'rb') as src, open('dest.jpg', 'wb') as dst:
    dst.write(src.read())
```
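Since Python 3.10 you can also parenthesize multiple context managers, which reads better once the line gets long — a small sketch:

```python
# Python 3.10+ syntax
with (
    open('input.txt', 'r') as fin,
    open('output.txt', 'w') as fout,
):
    fout.writelines(line.upper() for line in fin)
```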
Rolling your own context manager:

```python
from contextlib import contextmanager

@contextmanager
def open_file(filename, mode='r'):
    f = open(filename, mode)
    try:
        yield f
    finally:
        f.close()

with open_file('data.txt') as f:
    content = f.read()
```
This is the approach I recommend most: concise and safe.
## Technique 4: Reading Line by Line (Essential for Big Files)

Option 1: iterate the file object directly (recommended):

```python
with open('big_file.txt', 'r', encoding='utf-8') as f:
    for line in f:
        cleaned = line.strip()
        if cleaned:
            process(cleaned)
```
Option 2: readlines() (fine for small files):

```python
with open('data.txt', 'r') as f:
    lines = f.readlines()
    for line in lines:
        print(line.strip())
```
A performance comparison:

```python
import time

filename = 'large_data.txt'

def read_line_by_line():
    total = 0
    with open(filename, 'r') as f:
        for line in f:
            total += len(line)
    return total

def read_all_lines():
    with open(filename, 'r') as f:
        lines = f.readlines()
    return sum(len(line) for line in lines)

def read_all():
    with open(filename, 'r') as f:
        content = f.read()
    return len(content)

print("Option 1 (iterate lines):")
start = time.time()
result = read_line_by_line()
print(f"  result: {result}, time: {time.time()-start:.2f}s")
print("  memory: essentially flat")

print("\nOption 2 (readlines):")
start = time.time()
result = read_all_lines()
print(f"  result: {result}, time: {time.time()-start:.2f}s")
print("  memory: grows by roughly the file size plus list overhead")

print("\nOption 3 (read):")
start = time.time()
result = read_all()
print(f"  result: {result}, time: {time.time()-start:.2f}s")
print("  memory: grows by roughly the file size")
```
Hands-on: crunching a huge log file:

```python
def analyze_log(filename, target_date):
    """Count errors and warnings for a given date."""
    error_count = 0
    warning_count = 0

    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            if target_date in line:
                if 'ERROR' in line:
                    error_count += 1
                elif 'WARNING' in line:
                    warning_count += 1

    return error_count, warning_count

errors, warnings = analyze_log('app.log', '2024-01-15')
print(f"Errors: {errors}, Warnings: {warnings}")
```
**Caution**: never call read() or readlines() on a large file — both load the entire file into memory.
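Line iteration assumes the file actually has newlines. For a huge file that is one giant line, read in fixed-size chunks instead — a minimal sketch using the two-argument `iter()` idiom (`handle` is a hypothetical processing function):

```python
with open('huge_single_line.txt', 'r', encoding='utf-8') as f:
    # call f.read(65536) repeatedly until it returns '' (end of file)
    for chunk in iter(lambda: f.read(65536), ''):
        handle(chunk)  # hypothetical per-chunk processing
```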
## Technique 5: Writing Files

Overwrite ('w' mode):

```python
with open('output.txt', 'w', encoding='utf-8') as f:
    f.write('Hello World\n')
    f.write('Second line\n')
```
Append ('a' mode):

```python
import datetime

with open('log.txt', 'a', encoding='utf-8') as f:
    timestamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    f.write(f'[{timestamp}] new log entry\n')

# appending several records at once
logs = [
    '2024-01-15 10:00:00 INFO Application started\n',
    '2024-01-15 10:05:00 ERROR Connection failed\n',
    '2024-01-15 10:10:00 INFO Retrying...\n'
]
with open('app.log', 'a', encoding='utf-8') as f:
    f.writelines(logs)
```
Writing multiple lines:

```python
lines = ['Line 1\n', 'Line 2\n', 'Line 3\n']
with open('output.txt', 'w', encoding='utf-8') as f:
    f.writelines(lines)

# writelines() also accepts a generator
data = ['apple', 'banana', 'cherry']
with open('fruits.txt', 'w', encoding='utf-8') as f:
    f.writelines(f'{item}\n' for item in data)
```
Read-and-write ('r+' mode):

```python
with open('data.txt', 'r+') as f:
    content = f.read()
    f.seek(0)          # rewind before writing
    f.write('New data')
```
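One subtlety with 'r+': writing from the start only overwrites as many bytes as you write, so if the old content was longer, its tail survives. Call truncate() when the file should end where your new data ends — a small sketch:

```python
with open('data.txt', 'r+', encoding='utf-8') as f:
    content = f.read()
    f.seek(0)
    f.write('New data')
    f.truncate()  # discard any leftover bytes from the old, longer content
```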
Performance: one write at a time vs. batched writes:

```python
import time

data = [f'Line {i}\n' for i in range(100000)]

def write_line_by_line():
    with open('test1.txt', 'w') as f:
        for line in data:
            f.write(line)

def write_batch():
    with open('test2.txt', 'w') as f:
        f.writelines(data)

start = time.time()
write_line_by_line()
print(f"write() per line: {time.time()-start:.3f}s")

start = time.time()
write_batch()
print(f"writelines():     {time.time()-start:.3f}s")
```
## Technique 6: Binary Files

Reading binary data:

```python
with open('photo.jpg', 'rb') as f:
    data = f.read()
    print(f"Image size: {len(data)} bytes")

def read_in_chunks(filename, chunk_size=8192):
    """Read a large file chunk by chunk."""
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

for chunk in read_in_chunks('video.mp4'):
    process(chunk)
```
Writing binary data:

```python
# naive copy: fine for small files
with open('photo.jpg', 'rb') as src:
    with open('photo_copy.jpg', 'wb') as dst:
        dst.write(src.read())

def copy_file(src_path, dst_path, chunk_size=8192):
    """Copy a file chunk by chunk (memory-friendly)."""
    with open(src_path, 'rb') as src:
        with open(dst_path, 'wb') as dst:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                dst.write(chunk)
    print(f"Copy finished: {dst_path}")

copy_file('large_video.mp4', 'backup_video.mp4')
```
Binary operations in practice:

```python
def add_watermark(image_path, output_path):
    """Append a crude text marker to an image file.

    Note: this just tacks bytes onto the end of the file. Most viewers
    ignore trailing data, so it is a tag, not a visual watermark.
    """
    with open(image_path, 'rb') as f:
        data = bytearray(f.read())
    data.extend(b'\n[WATERMARK] Created by Python')
    with open(output_path, 'wb') as f:
        f.write(data)

def get_jpeg_file_size(filename):
    """Validate the JPEG magic number, then return the file size in bytes."""
    with open(filename, 'rb') as f:
        magic = f.read(2)
        if magic != b'\xff\xd8':
            raise ValueError("Not a valid JPEG file")
        f.seek(0, 2)       # jump to the end of the file
        return f.tell()    # current offset == file size
```
## Technique 7: Always Specify the Encoding (Essential for Chinese Text)

Common encoding problems:

```python
# a UTF-8 file must be opened as UTF-8...
with open('chinese.txt', 'r', encoding='utf-8') as f:
    content = f.read()

# ...and a GBK file as GBK, or you get UnicodeDecodeError or mojibake
with open('chinese.txt', 'r', encoding='gbk') as f:
    content = f.read()
```
Auto-detecting the encoding (chardet is a third-party package: `pip install chardet`):

```python
import chardet

def detect_encoding(filename):
    """Guess a file's encoding from its first bytes."""
    with open(filename, 'rb') as f:
        raw = f.read(10000)
    result = chardet.detect(raw)
    return result['encoding']

filename = 'unknown_encoding.txt'
encoding = detect_encoding(filename)
print(f"Detected encoding: {encoding}")

with open(filename, 'r', encoding=encoding) as f:
    content = f.read()
```
Converting between encodings:

```python
def convert_encoding(input_file, output_file, from_enc, to_enc='utf-8'):
    """Re-encode a whole file."""
    with open(input_file, 'r', encoding=from_enc) as f:
        content = f.read()
    with open(output_file, 'w', encoding=to_enc) as f:
        f.write(content)
    print(f"Converted: {from_enc} -> {to_enc}")

convert_encoding('gbk_file.txt', 'utf8_file.txt', 'gbk', 'utf-8')
```
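That version holds the whole file in memory. For files too big for that, the same conversion can stream line by line — a sketch under the same assumptions:

```python
def convert_encoding_streaming(input_file, output_file, from_enc, to_enc='utf-8'):
    """Re-encode a file line by line, keeping memory usage flat."""
    with open(input_file, 'r', encoding=from_enc) as fin, \
         open(output_file, 'w', encoding=to_enc) as fout:
        for line in fin:
            fout.write(line)
```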
Common encodings:

| Encoding | Typical use | Notes |
|----------|-------------|-------|
| utf-8 | International standard | Recommended; ASCII-compatible |
| gbk | Chinese Windows | Simplified Chinese |
| gb2312 | Simplified Chinese | Legacy systems |
| big5 | Traditional Chinese | Hong Kong / Taiwan |
| latin-1 | Western European | ISO-8859-1 |
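When you don't know which of these a file uses and can't install chardet, a pragmatic fallback is to try a few candidates in order. A rough sketch — the candidate list is my assumption, tune it to your data, and note that latin-1 decodes any byte sequence, so add it only as a deliberate last resort:

```python
def read_with_fallback(filename, encodings=('utf-8', 'gbk', 'big5')):
    """Try each candidate encoding in turn; return (text, encoding) on success."""
    for enc in encodings:
        try:
            with open(filename, 'r', encoding=enc) as f:
                return f.read(), enc
        except UnicodeDecodeError:
            continue
    raise ValueError(f'could not decode {filename} with any of {encodings}')
```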
Avoiding the BOM trap:

```python
# writing with encoding='utf-8' does NOT add a BOM
with open('file.txt', 'w', encoding='utf-8') as f:
    f.write('中文内容')

# files produced by some Windows tools (e.g. Notepad) may start with a
# BOM; 'utf-8-sig' strips it automatically if present
with open('file.txt', 'r', encoding='utf-8-sig') as f:
    content = f.read()
```
## Technique 8: pathlib.Path (The Modern Way)

Basic usage:

```python
from pathlib import Path

# read / write text in one line
content = Path('data.txt').read_text(encoding='utf-8')
Path('output.txt').write_text('Hello World\n', encoding='utf-8')

# read / write bytes in one line
image_data = Path('photo.jpg').read_bytes()
Path('output.jpg').write_bytes(image_data)
```
Path manipulation:

```python
from pathlib import Path

p = Path('/Users/wanfeng/projects/my_project/data.txt')

print(p.name)    # data.txt
print(p.stem)    # data
print(p.suffix)  # .txt
print(p.parent)  # /Users/wanfeng/projects/my_project

new_path = p.with_suffix('.csv')         # swap the extension
new_path = p.with_name('config.json')    # swap the filename
new_path = p.parent / 'backup' / p.name  # build a related path

print(p.exists())
print(p.is_file())
print(p.is_dir())
```
Batch-processing files:

```python
from pathlib import Path

folder = Path('./documents')

# all .txt files in the folder
for txt_file in folder.glob('*.txt'):
    content = txt_file.read_text(encoding='utf-8')
    print(f"{txt_file.name}: {len(content)} characters")

# all .py files, recursively
for py_file in folder.rglob('*.py'):
    print(py_file)

# create nested directories in one call
Path('./new_folder/sub_folder').mkdir(parents=True, exist_ok=True)
```
Path vs. open, performance-wise:

```python
import time
from pathlib import Path

test_file = 'small.txt'
Path(test_file).write_text('test content')

def test_open():
    start = time.time()
    for _ in range(10000):
        with open(test_file, 'r') as f:
            content = f.read()
    return time.time() - start

def test_path():
    start = time.time()
    for _ in range(10000):
        content = Path(test_file).read_text()
    return time.time() - start

print(f"plain open:     {test_open():.3f}s")
print(f"Path.read_text: {test_path():.3f}s")
```

Expect Path to come out marginally slower here, since each call also constructs a Path object — in practice the difference is negligible next to the I/O itself.
Why it's great:

- One line does it all, with closing handled automatically
- Path arithmetic is far more elegant than string concatenation
- Cross-platform (Windows / macOS / Linux)

## Technique 9: CSV Files

Using the csv module:

```python
import csv

# reading
with open('data.csv', 'r', encoding='utf-8') as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)

# writing
with open('output.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Age', 'City'])
    writer.writerow(['Alice', '25', 'Beijing'])
    writer.writerow(['Bob', '30', 'Shanghai'])

    # write many rows at once
    data = [
        ['Charlie', '35', 'Guangzhou'],
        ['David', '28', 'Shenzhen']
    ]
    writer.writerows(data)
```
DictReader / DictWriter:

```python
import csv

# read rows as dictionaries keyed by the header
with open('data.csv', 'r', encoding='utf-8') as f:
    reader = csv.DictReader(f)
    for row in reader:
        print(f"{row['Name']}: {row['Age']} years old")

# write rows from dictionaries
with open('output.csv', 'w', encoding='utf-8', newline='') as f:
    fieldnames = ['Name', 'Age', 'City']
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({'Name': 'Alice', 'Age': 25, 'City': 'Beijing'})
    writer.writerow({'Name': 'Bob', 'Age': 30, 'City': 'Shanghai'})
```
Processing a large CSV without loading it all:

```python
import csv

def process_large_csv(input_file, output_file):
    """Stream a big CSV through a transformation, row by row."""
    with open(input_file, 'r', encoding='utf-8') as fin, \
         open(output_file, 'w', encoding='utf-8', newline='') as fout:

        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()

        for row in reader:
            row['Age'] = str(int(row['Age']) + 1)  # example transformation
            writer.writerow(row)

process_large_csv('users.csv', 'users_updated.csv')
```
## Technique 10: JSON Files

Basic operations:

```python
import json

# reading
with open('config.json', 'r', encoding='utf-8') as f:
    data = json.load(f)
    print(data['name'])

# writing
data = {'name': 'Alice', 'age': 25, 'skills': ['Python', 'JavaScript']}
with open('output.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```
Formatting options:

```python
import json

data = {
    'name': '张三',
    'age': 30,
    'hobbies': ['reading', 'coding', 'gaming']
}

print(json.dumps(data))                                      # compact, non-ASCII escaped
print(json.dumps(data, ensure_ascii=False))                  # keep Chinese readable
print(json.dumps(data, ensure_ascii=False, indent=2))        # pretty-printed
print(json.dumps(data, ensure_ascii=False, sort_keys=True))  # keys sorted

with open('pretty.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=2, sort_keys=True)
```
Handling types JSON doesn't know:

```python
import json
from datetime import datetime

class DateTimeEncoder(json.JSONEncoder):
    """Serialize datetime objects as ISO-8601 strings."""
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.isoformat()
        return super().default(obj)

data = {
    'name': 'Event',
    'time': datetime.now(),
    'participants': ['Alice', 'Bob']
}

json_str = json.dumps(data, cls=DateTimeEncoder, ensure_ascii=False)
print(json_str)

def datetime_decoder(dct):
    """Turn ISO strings in time-like fields back into datetime objects."""
    for key, value in dct.items():
        if key.endswith('_time') or key == 'time':
            try:
                dct[key] = datetime.fromisoformat(value)
            except (TypeError, ValueError):
                pass
    return dct

data = json.loads(json_str, object_hook=datetime_decoder)
```
## Pitfall Guide: 10 Traps I've Fallen Into

Pitfall 1: forgetting to close the file

```python
# bad: the handle leaks
f = open('file.txt', 'r')
data = f.read()

# good: with closes it for you
with open('file.txt', 'r') as f:
    data = f.read()
```
Pitfall 2: encoding errors

```python
# bad: relies on the platform's default encoding
with open('chinese.txt', 'r') as f:
    content = f.read()

# good: always state the encoding explicitly
with open('chinese.txt', 'r', encoding='utf-8') as f:
    content = f.read()
```
Pitfall 3: 'w' mode wipes the file

```python
# bad: every run erases the previous log!
with open('log.txt', 'w') as f:
    f.write('new log')

# good: append instead
with open('log.txt', 'a') as f:
    f.write('new log\n')
```
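If what you actually want is "create, but never clobber an existing file", mode 'x' (exclusive creation) raises FileExistsError instead of silently overwriting — a small sketch:

```python
try:
    with open('report.txt', 'x', encoding='utf-8') as f:  # fails if the file exists
        f.write('fresh report\n')
except FileExistsError:
    print('report.txt already exists, refusing to overwrite')
```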
Pitfall 4: reading a huge file in one gulp

```python
# bad: the whole file lands in memory
with open('huge_file.txt', 'r') as f:
    content = f.read()

# good: stream it line by line
with open('huge_file.txt', 'r') as f:
    for line in f:
        process(line)
```
Pitfall 5: Windows line endings

```python
# text mode translates \r\n to \n for you, so strip() usually suffices
with open('windows_file.txt', 'r') as f:
    for line in f:
        print(line.strip())

# newline='' disables that translation; strip both characters yourself
with open('file.txt', 'r', newline='') as f:
    for line in f:
        line = line.rstrip('\r\n')
```
Pitfall 6: path concatenation by hand

```python
# fragile: separator is hard-coded
path = 'folder' + '/' + 'file.txt'

# better: pathlib
from pathlib import Path
path = Path('folder') / 'file.txt'

# or the classic os.path
import os
path = os.path.join('folder', 'file.txt')
```
Pitfall 7: assuming the file exists

```python
# bad: FileNotFoundError if config.txt is missing
with open('config.txt', 'r') as f:
    config = f.read()

# better: check first
from pathlib import Path
config_file = Path('config.txt')
if config_file.exists():
    config = config_file.read_text()
else:
    config = default_config
```
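The check-then-read pattern has a small race window (the file can vanish between exists() and the read). The more Pythonic EAFP style just tries and catches — a sketch, reusing the Path import above and with `default_config` standing in for whatever fallback you use:

```python
try:
    config = Path('config.txt').read_text(encoding='utf-8')
except FileNotFoundError:
    config = default_config  # hypothetical fallback value
```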
Pitfall 8: concurrent writes clobbering each other

Note that fcntl is Unix-only; on Windows, look at msvcrt.locking or a third-party package such as portalocker.

```python
import fcntl  # Unix-only

def safe_write(filename, content):
    with open(filename, 'a') as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)  # take an exclusive lock
        try:
            f.write(content)
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)  # always release it
```
Pitfall 9: leftover temp files

```python
# bad: a hand-made temp file that nobody deletes
f = open('temp_xxx.txt', 'w')

# good: tempfile cleans up after itself
import tempfile
with tempfile.NamedTemporaryFile(mode='w', delete=True) as f:
    f.write('temporary content')
# the file is gone once the block exits

# temp directories need explicit cleanup
temp_dir = tempfile.mkdtemp()
import shutil
shutil.rmtree(temp_dir)
```
Pitfall 10: blank rows in CSV output

```python
import csv

# bad on Windows: every row is followed by a blank line
with open('data.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['a', 'b'])

# good: newline='' is what the csv module expects
with open('data.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['a', 'b'])
```
## Hands-On: Batch File-Processing Tools

Case 1: batch-renaming files

```python
from pathlib import Path
import re

def batch_rename(folder, pattern, replacement):
    """Rename every file in a folder using a regex substitution."""
    folder = Path(folder)
    for file in folder.iterdir():
        if file.is_file():
            new_name = re.sub(pattern, replacement, file.name)
            new_path = file.parent / new_name
            file.rename(new_path)
            print(f"{file.name} -> {new_name}")

# prefix every .jpg with the year
batch_rename('./images', r'(.+)\.jpg', r'2024_\1.jpg')
```
Case 2: counting lines of code

```python
from pathlib import Path

def count_lines(folder, extension='.py'):
    """Count lines across all files with the given extension."""
    total_lines = 0
    file_count = 0

    for file in Path(folder).rglob(f'*{extension}'):
        with open(file, 'r', encoding='utf-8') as f:
            lines = len(f.readlines())
            total_lines += lines
            file_count += 1
            print(f"{file.name}: {lines} lines")

    print(f"\nTotal: {file_count} files, {total_lines} lines of code")
    return total_lines

count_lines('./my_project', '.py')
```
Case 3: a log-analysis tool

```python
from pathlib import Path
from collections import Counter
import re

def analyze_logs(log_dir):
    """Analyze every .log file in a directory."""
    error_types = Counter()
    hourly_errors = Counter()

    for log_file in Path(log_dir).glob('*.log'):
        with open(log_file, 'r', encoding='utf-8') as f:
            for line in f:
                if 'ERROR' in line:
                    error_match = re.search(r'ERROR (\w+)', line)
                    if error_match:
                        error_types[error_match.group(1)] += 1
                    time_match = re.search(r'(\d{2}):\d{2}:\d{2}', line)
                    if time_match:
                        hourly_errors[time_match.group(1)] += 1

    print("=== Error types ===")
    for error, count in error_types.most_common(10):
        print(f"{error}: {count}")

    print("\n=== Errors per hour ===")
    for hour in sorted(hourly_errors.keys()):
        print(f"{hour}:00 - {hourly_errors[hour]} errors")

analyze_logs('./logs')
```
Case 4: folder sync / backup

```python
from pathlib import Path
import shutil
import hashlib

def get_file_hash(filepath):
    """MD5 of a file's contents (fine for change detection, not security)."""
    with open(filepath, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def sync_folders(source, target):
    """One-way sync: make target mirror source."""
    source = Path(source)
    target = Path(target)
    target.mkdir(exist_ok=True)

    # copy new and changed files
    for src_file in source.rglob('*'):
        if src_file.is_file():
            rel_path = src_file.relative_to(source)
            tgt_file = target / rel_path
            if not tgt_file.exists():
                tgt_file.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src_file, tgt_file)
                print(f"added:   {rel_path}")
            elif get_file_hash(src_file) != get_file_hash(tgt_file):
                shutil.copy2(src_file, tgt_file)
                print(f"updated: {rel_path}")

    # delete files that no longer exist in source
    for tgt_file in target.rglob('*'):
        if tgt_file.is_file():
            rel_path = tgt_file.relative_to(target)
            src_file = source / rel_path
            if not src_file.exists():
                tgt_file.unlink()
                print(f"removed: {rel_path}")

sync_folders('./project', './backup')
```
## Performance Tips

Tip 1: tune the buffer size

```python
# a larger buffer (here 64 KB) can reduce syscalls on big sequential reads
with open('large_file.txt', 'r', buffering=65536) as f:
    for line in f:
        process(line)
```
Tip 2: batch your writes

```python
# slower: 100,000 separate write() calls
with open('output.txt', 'w') as f:
    for i in range(100000):
        f.write(f'Line {i}\n')

# faster: build the lines, then write them in one shot
lines = [f'Line {i}\n' for i in range(100000)]
with open('output.txt', 'w') as f:
    f.writelines(lines)
```
Tip 3: memory-map huge files

```python
import mmap

with open('huge_file.bin', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        data = mm[1000:2000]            # random access by slicing
        position = mm.find(b'pattern')  # fast search without reading it all
```
Tip 4: async I/O (Python 3.7+, using the third-party aiofiles package)

```python
import asyncio
import aiofiles  # pip install aiofiles

async def async_read_file(filename):
    """Read a file without blocking the event loop."""
    async with aiofiles.open(filename, 'r', encoding='utf-8') as f:
        content = await f.read()
    return content

async def async_write_file(filename, content):
    """Write a file without blocking the event loop."""
    async with aiofiles.open(filename, 'w', encoding='utf-8') as f:
        await f.write(content)

async def process_multiple_files(filenames):
    """Read several files concurrently."""
    tasks = [async_read_file(f) for f in filenames]
    contents = await asyncio.gather(*tasks)
    return contents

filenames = ['file1.txt', 'file2.txt', 'file3.txt']
results = asyncio.run(process_multiple_files(filenames))
```
## Recommended: AI Python Bootcamp for Beginners

Want to learn Python file operations and data processing systematically?

What the course covers:

- ✅ Python fundamentals
- ✅ File I/O and data processing
- ✅ CSV, Excel, and JSON handling
- ✅ Hands-on projects

🎁 Limited-time bonus: a print copy of *Python Crash Course*
👉 Click here for details
PS: File handling is a fundamental programming skill, and mastering these techniques makes data work far easier. Remember the core principles: use with, specify the encoding, and process big files line by line.
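All three principles in one tiny sketch (the filename and `handle()` are placeholders):

```python
with open('big_data.txt', 'r', encoding='utf-8') as f:  # with + explicit encoding
    for line in f:                                      # line by line, flat memory
        handle(line.strip())                            # hypothetical per-line work
```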
## 📚 Recommended Textbook

Main text: *Python Crash Course, 3rd Edition* (《Python 编程从入门到实践(第 3 版)》)
## 📚 Recommended: Python Bootcamp for Beginners

To learn Python systematically, try this free introductory course 👇

| Feature | Description |
|---------|-------------|
| 🎯 Built for absolute beginners | Low barrier, quick start |
| 📹 Video walkthroughs | Pair them with the articles for best results |
| 💬 Dedicated Q&A group | Someone to guide you when you're stuck |
| 🎁 Free book | Top students receive *Python Crash Course* |
👉 Click to claim the free Python Bootcamp for Beginners
## 💬 Contact Me

Main services: AI programming training, corporate training, technical consulting

🎓 Want to learn AI programming systematically? 程序员晚枫's hands-on AI programming course takes you from zero!