Source: juejin.cn/post/7529035047552335907

In web applications, uploading large files is a common but thorny challenge. Traditional single-request uploads routinely run into timeouts and out-of-memory errors as files grow. This article walks through an efficient chunked-upload scheme built on Spring Boot that addresses the core pain points of large-file transfer.

01 Why chunked uploads?

Once a file exceeds roughly 100 MB, the traditional approach suffers three major problems:

- Unstable transfers: a single long-running request is easily interrupted.
- Exhausted server resources: loading the whole file at once can cause out-of-memory errors.
- Expensive failures: any failure forces re-uploading the entire file.

Chunked uploads address all of these:

- smaller per-request payloads;
- support for resuming interrupted uploads;
- concurrent chunk uploads for higher throughput;
- lower server memory pressure.

02 Core principle

The client splits the file into fixed-size chunks and uploads each chunk as an independent request. Once all chunks have arrived, the server merges them back into the original file in index order.

03 Spring Boot implementation

Core dependencies:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.11.0</version>
    </dependency>
</dependencies>
```

Key controller implementation — initializing an upload session and receiving chunks:

```java
@RestController
@RequestMapping("/upload")
public class ChunkUploadController {

    private final String CHUNK_DIR = "uploads/chunks/";
    private final String FINAL_DIR = "uploads/final/";

    /**
     * Initialize an upload.
     * @param fileName original file name
     * @param fileMd5  unique identifier of the file
     */
    @PostMapping("/init")
    public ResponseEntity<String> initUpload(
            @RequestParam String fileName,
            @RequestParam String fileMd5) {
        // Create a temporary directory for this file's chunks
        String uploadId = UUID.randomUUID().toString();
        Path chunkDir = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId);
        try {
            Files.createDirectories(chunkDir);
        } catch (IOException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                    .body("Failed to create directory");
        }
        return ResponseEntity.ok(uploadId);
    }

    /**
     * Upload a single chunk.
     * @param chunk the chunk data
     * @param index the chunk index
     */
    @PostMapping("/chunk")
    public ResponseEntity<String> uploadChunk(
            @RequestParam MultipartFile chunk,
            @RequestParam String uploadId,
            @RequestParam String fileMd5,
            @RequestParam Integer index) {
        // Build the chunk file name
        String chunkName = "chunk_" + index + ".tmp";
        Path filePath = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId, chunkName);
        try {
            chunk.transferTo(filePath);
            return ResponseEntity.ok("Chunk uploaded");
        } catch (IOException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                    .body("Failed to save chunk");
        }
    }
}
```
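As a sanity check on the chunk arithmetic the endpoints above rely on, here is a small self-contained sketch. The class and method names are illustrative assumptions, not part of the article's code; it computes how many chunks a file needs and the byte range each chunk index covers.

```java
// Hypothetical helper mirroring the chunk arithmetic assumed by the endpoints.
public class ChunkMath {

    /** Number of chunks needed to cover fileSize bytes (ceiling division). */
    public static int chunkCount(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    /** Start offset (inclusive) and end offset (exclusive) of chunk i. */
    public static long[] chunkRange(long fileSize, long chunkSize, int i) {
        long start = i * chunkSize;
        long end = Math.min(fileSize, start + chunkSize);
        return new long[]{start, end};
    }
}
```

The last chunk is simply allowed to be shorter than the others, which is why the merge step can concatenate chunks blindly in index order.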
Merging the chunks (same controller):

```java
/**
 * Merge the uploaded chunks into the final file.
 */
@PostMapping("/merge")
public ResponseEntity<String> mergeChunks(
        @RequestParam String fileName,
        @RequestParam String uploadId,
        @RequestParam String fileMd5) {
    // 1. Locate the chunk directory
    File chunkDir = new File(CHUNK_DIR + fileMd5 + "_" + uploadId);

    // 2. List the chunk files and sort them by index
    File[] chunks = chunkDir.listFiles();
    if (chunks == null || chunks.length == 0) {
        return ResponseEntity.badRequest().body("No chunk files found");
    }
    Arrays.sort(chunks, Comparator.comparingInt(
            f -> Integer.parseInt(f.getName().split("_")[1].split("\\.")[0])));

    // 3. Concatenate the chunks into the final file
    Path finalPath = Paths.get(FINAL_DIR, fileName);
    try (BufferedOutputStream outputStream =
                 new BufferedOutputStream(Files.newOutputStream(finalPath))) {
        for (File chunkFile : chunks) {
            Files.copy(chunkFile.toPath(), outputStream);
        }
        // 4. Clean up the temporary chunks
        FileUtils.deleteDirectory(chunkDir);
        return ResponseEntity.ok("Merge succeeded: " + finalPath);
    } catch (IOException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Merge failed: " + e.getMessage());
    }
}
```

High-performance merge optimization: when handling very large files (10 GB and above), avoid loading everything into memory.

```java
// Use RandomAccessFile with a small buffer for better performance
public void mergeFiles(File targetFile, List<File> chunkFiles) throws IOException {
    try (RandomAccessFile target = new RandomAccessFile(targetFile, "rw")) {
        byte[] buffer = new byte[1024 * 8]; // 8 KB buffer
        for (File chunk : chunkFiles) {
            try (RandomAccessFile src = new RandomAccessFile(chunk, "r")) {
                int bytesRead;
                while ((bytesRead = src.read(buffer)) != -1) {
                    target.write(buffer, 0, bytesRead);
                }
            }
        }
    }
}
```

04 Front-end implementation

Key code (Vue example). The chunk-splitting function:

```javascript
// 5 MB chunk size
const CHUNK_SIZE = 5 * 1024 * 1024;

/**
 * Split a File into chunks.
 */
function processFile(file) {
  const chunkCount = Math.ceil(file.size / CHUNK_SIZE);
  const chunks = [];
  for (let i = 0; i < chunkCount; i++) {
    const start = i * CHUNK_SIZE;
    const end = Math.min(file.size, start + CHUNK_SIZE);
    chunks.push(file.slice(start, end));
  }
  return chunks;
}
```
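The merge step can be exercised outside Spring. A minimal, self-contained sketch (the class name is mine, not from the article) that concatenates ordered part files exactly the way the controller's loop does:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Illustrative stand-in for the controller's merge loop: concatenate
// already-sorted part files into one target via a single output stream.
public class ChunkMerger {
    public static void merge(List<Path> orderedChunks, Path target) throws IOException {
        try (OutputStream out = Files.newOutputStream(target,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (Path chunk : orderedChunks) {
                Files.copy(chunk, out); // appends this chunk's bytes to the open stream
            }
        }
    }
}
```

Because the stream stays open across iterations, each `Files.copy` appends rather than overwrites, which is the same property the controller depends on.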
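The protocol identifies each file by an MD5 fingerprint (`fileMd5`). A server-side sketch of computing it — a hypothetical helper not shown in the article — streams the file through `MessageDigest` so memory use stays flat even for huge files:

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Assumed helper: streaming MD5, the server-side counterpart of the
// client's calculateFileMD5.
public class FileMd5 {
    public static String md5Hex(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n); // digest incrementally, never buffering the whole file
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

The same routine could be run after the merge to verify that the assembled file matches the `fileMd5` the client reported.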
Upload logic with progress display:

```javascript
async function uploadFile(file) {
  const fileMd5 = await calculateFileMD5(file); // client-side MD5 of the file

  // 1. Initialize the upload session
  const { data: uploadId } = await axios.post('/upload/init', {
    fileName: file.name,
    fileMd5
  });

  // 2. Upload the chunks in parallel
  const chunks = processFile(file);
  const total = chunks.length;
  let uploaded = 0;
  await Promise.all(chunks.map((chunk, index) => {
    const formData = new FormData();
    formData.append('chunk', chunk, `chunk_${index}`);
    formData.append('index', index);
    formData.append('uploadId', uploadId);
    formData.append('fileMd5', fileMd5);
    return axios.post('/upload/chunk', formData, {
      headers: { 'Content-Type': 'multipart/form-data' },
      onUploadProgress: progress => {
        // Update the progress bar
        const percent = ((uploaded * 100) / total).toFixed(1);
        updateProgress(percent);
      }
    }).then(() => uploaded++);
  }));

  // 3. Trigger the merge
  const result = await axios.post('/upload/merge', {
    fileName: file.name,
    uploadId,
    fileMd5
  });
  alert(`Upload complete: ${result.data}`);
}
```

05 Enterprise-grade optimizations

1. Resumable uploads

Add a check endpoint on the server:

```java
@GetMapping("/check/{fileMd5}/{uploadId}")
public ResponseEntity<List<Integer>> getUploadedChunks(
        @PathVariable String fileMd5,
        @PathVariable String uploadId) {
    Path chunkDir = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId);
    if (!Files.exists(chunkDir)) {
        return ResponseEntity.ok(Collections.emptyList());
    }
    try {
        List<Integer> uploaded = Files.list(chunkDir)
                .map(p -> p.getFileName().toString())
                .filter(name -> name.startsWith("chunk_"))
                .map(name -> name.replace("chunk_", "").replace(".tmp", ""))
                .map(Integer::parseInt)
                .collect(Collectors.toList());
        return ResponseEntity.ok(uploaded);
    } catch (IOException e) {
        return ResponseEntity.status(500).body(Collections.emptyList());
    }
}
```

The front end checks before uploading:

```javascript
const { data: uploadedChunks } = await axios.get(
  `/upload/check/${fileMd5}/${uploadId}`
);
chunks.map((chunk, index) => {
  if (uploadedChunks.includes(index)) {
    uploaded++; // skip chunks that are already on the server
    return Promise.resolve();
  }
  // upload as usual...
});
```

2. Chunk integrity verification

Use HMAC-SHA256 to verify each chunk:

```java
@PostMapping("/chunk")
public ResponseEntity<?> uploadChunk(
        @RequestParam MultipartFile chunk,
        @RequestParam String sign // signature generated by the client
) {
    // Verify the signature with a shared secret
    String secretKey = "your-secret-key";
    String serverSign = HmacUtils.hmacSha256Hex(secretKey, chunk.getBytes());
    if (!serverSign.equals(sign)) {
        return ResponseEntity.status(403).body("Signature verification failed");
    }
    // handle the chunk...
    return ResponseEntity.ok().build();
}
```
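On the client side, the index list returned by the check endpoint can be turned into a to-do list of chunks still missing. A sketch of that logic (the helper name is mine, not from the article):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical resume planner: given the total chunk count and the indices
// the server already has, compute which indices still need uploading.
public class ResumePlanner {
    public static List<Integer> missingChunks(int totalChunks, List<Integer> uploaded) {
        Set<Integer> have = new TreeSet<>(uploaded);
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            if (!have.contains(i)) missing.add(i);
        }
        return missing;
    }
}
```

After a network interruption, the client re-runs the check and uploads only the returned missing indices instead of starting over.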
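The `HmacUtils` call above comes from commons-codec. For reference, the same computation can be sketched with only the JDK's `javax.crypto` (class name and hex encoding are my own choices):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// JDK-only sketch of the chunk signature: HMAC-SHA256 over the chunk bytes,
// hex-encoded so it can travel as a plain request parameter.
public class ChunkSigner {
    public static String hmacSha256Hex(String secretKey, byte[] payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(payload);
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

The client computes the same value over each chunk before sending; any mismatch on the server means the chunk was corrupted or tampered with in transit.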
3. Cloud storage integration (MinIO example)

```java
@Configuration
public class MinioConfig {
    @Bean
    public MinioClient minioClient() {
        return MinioClient.builder()
                .endpoint("http://minio:9000")
                .credentials("minio-access", "minio-secret")
                .build();
    }
}

@Service
public class MinioUploadService {
    @Autowired
    private MinioClient minioClient;

    public void uploadChunk(String bucket, String object,
                            InputStream chunkStream, long length) throws Exception {
        minioClient.putObject(
                PutObjectArgs.builder()
                        .bucket(bucket)
                        .object(object)
                        .stream(chunkStream, length, -1)
                        .build()
        );
    }
}
```

06 Performance comparison

We benchmarked uploads with a 10 GB file.

07 Best-practice recommendations

Chunk size selection:

- Intranet: 10–20 MB
- Mobile networks: 1–5 MB
- WAN: 500 KB–1 MB

Scheduled cleanup of temporary files:

```java
@Scheduled(fixedRate = 24 * 60 * 60 * 1000) // run daily
public void cleanTempFiles() {
    File tempDir = new File(CHUNK_DIR);
    // Delete temporary chunk directories older than 24 hours
    File[] dirs = tempDir.listFiles(File::isDirectory);
    if (dirs == null) return;
    long cutoff = System.currentTimeMillis() - 24 * 60 * 60 * 1000L;
    for (File dir : dirs) {
        if (dir.lastModified() < cutoff) {
            FileUtils.deleteQuietly(dir);
        }
    }
}
```

Upload size limits:

```yaml
spring:
  servlet:
    multipart:
      max-file-size: 100MB      # maximum size of a single chunk
      max-request-size: 100MB
```

08 Conclusion

Chunked uploads with Spring Boot solve the core pain points of large-file transfer. Combined with resumable uploads, chunk verification, and security controls, they form the basis of a robust, enterprise-grade file-transfer solution. The code in this article can be integrated into production systems directly; tune the chunk size and concurrency strategy to your actual workload.