At Meta, data pipelines often need to scan very large log exports from systems such as Scribe without reading the entire file into memory. Implement a streaming parser that processes a text file chunk-by-chunk and computes line-level statistics.
Given a file path `file_path` and an integer `chunk_size`, write a function that reads the file incrementally and returns a list `[total_lines, non_empty_lines, longest_line_length]`.

Your solution must not call `read()` on the full file and must work correctly even when a line is split across multiple chunks; a sketch is given after the constraints.

Parameters:
- `file_path`: string path to a UTF-8 text file
- `chunk_size`: positive integer count of characters to read per iteration

Example 1
Input: file contents = "alpha\nbeta\ngamma\n\n", chunk_size = 5
Output: [4, 3, 5]
Explanation: The file has 4 lines ("alpha", "beta", "gamma", and an empty line), 3 of which are non-empty; the longest lines ("alpha" and "gamma") have length 5.
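To see why the chunk boundary matters here, the short trace below (using `io.StringIO` to stand in for the file) shows how `chunk_size = 5` slices Example 1's contents. The line "gamma" straddles two chunks, which is exactly the case a correct parser must stitch back together.

```python
import io

# Simulate chunked reads of Example 1's contents with chunk_size = 5.
stream = io.StringIO("alpha\nbeta\ngamma\n\n")
while chunk := stream.read(5):
    print(repr(chunk))
# Output:
# 'alpha'
# '\nbeta'
# '\ngamm'   <- "gamma" starts in this chunk...
# 'a\n\n'    <- ...and finishes in this one
```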
Example 2
Input: file contents = "one very long line without newline", chunk_size = 4
Output: [1, 1, 34]
Explanation: The file has one line even without a trailing newline.
Constraints:
1 <= chunk_size <= 10^6
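Below is a minimal reference sketch, assuming text-mode reads (so `chunk_size` counts characters) and the line semantics implied by the examples; the function name `stream_line_stats` is illustrative, not part of the problem. It keeps a carry buffer holding the unfinished tail of the current line, so lines split across chunks are handled naturally.

```python
def stream_line_stats(file_path: str, chunk_size: int) -> list[int]:
    """Return [total_lines, non_empty_lines, longest_line_length]."""
    total_lines = 0
    non_empty_lines = 0
    longest = 0
    carry = ""  # tail of a line that was split across chunk boundaries

    with open(file_path, "r", encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_size)  # reads at most chunk_size characters
            if not chunk:
                break
            carry += chunk
            parts = carry.split("\n")
            carry = parts.pop()  # the last part has no "\n" yet; keep it
            for line in parts:   # every remaining part is a complete line
                total_lines += 1
                if line:
                    non_empty_lines += 1
                longest = max(longest, len(line))

    # A final line without a trailing newline still counts (see Example 2).
    if carry:
        total_lines += 1
        non_empty_lines += 1
        longest = max(longest, len(carry))

    return [total_lines, non_empty_lines, longest]
```

A quick check against Example 1, writing the contents to a temporary file:

```python
import tempfile

with tempfile.NamedTemporaryFile("w", encoding="utf-8",
                                 suffix=".txt", delete=False) as tmp:
    tmp.write("alpha\nbeta\ngamma\n\n")

print(stream_line_stats(tmp.name, 5))  # -> [4, 3, 5]
```

Because the carry buffer holds only the current partial line, memory use is bounded by one chunk plus the longest single line rather than by the file size.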