How to Read Last Line of File Using Python

- 1. Using a Simple `for` Loop (Basic Approach)
- 2. Using `readlines()` (Not Memory-Efficient for Large Files)
- 3. Using `seek()` and `readlines()` (Optimized for Large Files)
- 4. Using `linecache` (Useful for Indexed Files)
- 5. Using `deque` (Best for Large Files)
- 6. Using `subprocess` to Call `tail` (Unix/Linux Only)
- 7. Using `read()` and `splitlines()`
- Conclusion

Reading the last line of a file is a common task in Python, especially when working with log files, real-time data streams, or large datasets. Python does not provide a built-in function to directly access the last line of a file, but there are several efficient methods to achieve this.
This comprehensive guide explores multiple ways to read the last line of a file in Python, ranging from simple approaches suitable for small files to optimized solutions for handling large files efficiently. By the end of this article, you will understand how to select the best method based on file size, performance, and memory efficiency. If you’re also interested in detecting when a file has reached its end, check out our guide on [Detecting EOF in Python]({{relref "/HowTo/Python/python end of file.en.md"}}).
Here are the most effective methods to achieve this, along with their advantages and trade-offs.
1. Using a Simple `for` Loop (Basic Approach)
If the file is small, you can read each line iteratively and store the last read line:
with open("file.txt", "r") as f:
for line in f:
pass # Skip processing each line
last_line = line
print(last_line)
This method reads the file line by line and stores the last line in memory. The `pass` statement is used to iterate without additional processing. However, this approach is inefficient for large files because it requires reading the entire file. (A small variant that also tracks the line number is sketched after the pros and cons below.)
Pros:
- Simple and easy to understand.
- No extra memory usage beyond the last line.
Cons:
- Inefficient for large files as it reads the entire file sequentially.
- Slower for files with millions of lines.
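A small variant sketch of the same loop that also tracks the line number via `enumerate()` (the `line_no` name is illustrative):

```python
last_line, line_no = "", 0
with open("file.txt", "r") as f:
    for line_no, last_line in enumerate(f, start=1):
        pass  # No per-line work; the loop variables keep their final values
print(f"Line {line_no}: {last_line.strip()}")
```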
If you need to modify or append new content after retrieving the last line, check our guide on [Appending Data and Writing to a File in Python]({{relref "/HowTo/Python/python write to file append new line.en.md"}}).
2. Using `readlines()` (Not Memory-Efficient for Large Files)
`readlines()` reads all lines into memory as a list, allowing easy access to the last line. For more details on reading an entire file into a list, check out [Reading a File into a List in Python]({{relref "/HowTo/Python/python read text file into list.en.md"}}).
with open("file.txt", "r") as f:
lines = f.readlines()
if lines: # Ensure the file is not empty
last_line = lines[-1].strip()
print(last_line)
This approach works by loading the entire file into memory and then using negative indexing to access the last line. However, it is not memory-efficient for large files.
Pros:
- Simple and requires minimal coding.
- Works well for small to medium-sized files.
Cons:
- Not suitable for large files as it loads all lines into memory.
3. Using `seek()` and `readlines()` (Optimized for Large Files)
For large files, reading the entire file is inefficient. Instead, we can use `seek()` to jump to the end and read backward in chunks:
```python
def get_last_line(filepath):
    with open(filepath, 'rb') as f:
        f.seek(0, 2)  # Move to the end of the file
        filesize = f.tell()
        offset = -100  # Start reading from the last 100 bytes
        while True:
            if filesize + offset > 0:
                f.seek(offset, 2)  # Jump back from the end of the file
            else:
                f.seek(0)  # Requested chunk exceeds the file; read it all
            lines = f.readlines()
            if len(lines) >= 2 or filesize + offset <= 0:
                # Either the chunk starts mid-line (so lines[-1] is complete),
                # or the whole file has already been read.
                return lines[-1].decode().strip() if lines else ""
            offset *= 2  # Double the chunk size and try again

last_line = get_last_line('file.txt')
print(last_line)
```
This method efficiently reads only the necessary part of the file instead of loading the entire content.
It works by first seeking to the end of the file using `seek(0, 2)`. Here, the `seek()` method takes two parameters: the first, `0`, specifies the offset position, and the second, `2`, indicates the reference point, which in this case is the end of the file.
By using `seek(0, 2)`, the file pointer is moved directly to the end of the file without reading its entire content. This allows us to efficiently start reading backward from the last position.
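For readability, the standard library also exposes named constants for this reference point; a minimal sketch using `os.SEEK_END`:

```python
import os

# seek(offset, whence): whence can be os.SEEK_SET (0, file start),
# os.SEEK_CUR (1, current position), or os.SEEK_END (2, file end).
with open("file.txt", "rb") as f:
    f.seek(0, os.SEEK_END)  # Same effect as f.seek(0, 2)
    print(f.tell())         # The file size in bytes
```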
If the chunk doesn’t contain the entire last line, the offset is doubled, and we read a larger portion of the file. This process continues until we find a complete line. The use of progressively increasing offsets ensures that we do not read too much data unnecessarily while still capturing the last line efficiently.
This approach is particularly useful when working with very large files because it avoids loading the entire file into memory. Instead, it reads only small portions of the file until the last line is found, making it highly memory-efficient and ideal for processing large datasets or log files.
Pros:
- Memory-efficient for large files.
- Avoids loading the entire file into memory.
Cons:
- Slightly more complex to implement.
- Might need tuning for specific file sizes.
4. Using `linecache` (Useful for Indexed Files)
The `linecache` module provides a way to read specific lines efficiently. If you need to retrieve specific lines instead of just the last line, check out our guide on [Reading Specific Lines from a File]({{relref "/HowTo/Python/How to read specific lines from a file in Python.en"}}).
```python
import linecache

def get_last_line(filepath):
    with open(filepath, 'r') as f:
        line_count = sum(1 for _ in f)  # Count total lines
    return linecache.getline(filepath, line_count).strip()

last_line = get_last_line('file.txt')
print(last_line)
```
This approach pre-counts the number of lines in the file before fetching the last line using `linecache.getline()`.
The line counting is done using `sum(1 for _ in f)`, which efficiently iterates through the file, adding 1 for each line encountered. This works because Python treats a file object as an iterable, where each iteration retrieves a new line until the end of the file is reached.
By using `sum(1 for _ in f)`, we ensure that the counting process does not store any lines in memory, making it much more memory-efficient than reading all lines into a list with `readlines()`.
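A small comparison sketch of the two counting styles, assuming the same `file.txt`:

```python
# Lazy count: streams the file, holding only one line at a time.
with open("file.txt", "r") as f:
    count_lazy = sum(1 for _ in f)

# Eager count: builds a full list of lines just to measure it.
with open("file.txt", "r") as f:
    count_eager = len(f.readlines())

assert count_lazy == count_eager  # Same result, very different memory use
```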
However, since this approach requires scanning the entire file, it may still be slow when dealing with very large files.
Pros:
- Efficient for files where line counts are known or stored.
Cons:
- Requires scanning the entire file to count lines.
5. Using `deque` (Best for Large Files)
`collections.deque` is optimized for reading the last `n` lines efficiently:
```python
from collections import deque

def get_last_line(filepath):
    with open(filepath, 'r') as f:
        last_lines = deque(f, maxlen=1)  # Keeps only the final line
    return last_lines.pop().strip() if last_lines else ""

last_line = get_last_line('file.txt')
print(last_line)
```
Using `deque`, we only store the last line in memory, making this approach highly efficient.
The `deque` object is a double-ended queue that allows fast appends and pops from both ends. When created with `maxlen=1`, it retains only the most recent (last) line, making it ideal for handling large files without excessive memory consumption.
Since it reads the file line by line, it processes only the required data rather than loading the entire file into memory like `readlines()`. This makes it a great alternative when dealing with log files or real-time data streams where memory efficiency is critical.
While this method is straightforward, it requires an understanding of the `deque` data structure, which is not commonly used in basic file handling scenarios. However, for large files, this technique provides a simple and highly efficient way to retrieve the last line while minimizing resource usage.
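Because `maxlen` is arbitrary, the same idea extends to the last `n` lines. A minimal sketch (the `get_last_lines` helper name is illustrative):

```python
from collections import deque

def get_last_lines(filepath, n=3):
    # The deque silently discards older lines once it holds n of them.
    with open(filepath, 'r') as f:
        return [line.rstrip('\n') for line in deque(f, maxlen=n)]

print(get_last_lines('file.txt'))  # Roughly equivalent to 'tail -n 3'
```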
Pros:
- Memory-efficient and fast.
- Works well for very large files.
Cons:
- Requires knowledge of the `deque` data structure.
6. Using `subprocess` to Call `tail` (Unix/Linux Only)
On Unix/Linux, the `tail` command is an efficient way to get the last line:
```python
import subprocess

def get_last_line(filepath):
    result = subprocess.run(['tail', '-n', '1', filepath], capture_output=True, text=True)
    return result.stdout.strip()

last_line = get_last_line('file.txt')
print(last_line)
```
This method leverages the highly optimized Unix `tail` command, making it the fastest approach for large log files on Linux systems.
The `tail` command is specifically designed to efficiently retrieve the last few lines of a file, making it a perfect choice for log monitoring or real-time data processing. Since it operates at the system level, it bypasses Python’s file handling overhead, leading to faster execution times compared to reading the file within Python.
By using `subprocess.run()`, Python executes the `tail -n 1` command and captures its output. The `-n 1` flag ensures that only the last line of the file is retrieved, minimizing the data processed.
This method is particularly advantageous when dealing with constantly growing log files, as it allows you to quickly access the most recent entry without having to parse the entire file. However, because it relies on an external system command, it is limited to Unix-based systems and does not work natively on Windows.
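If a failure (for example, a missing file) should raise an exception rather than silently return an empty string, `subprocess.run()` also accepts `check=True`. A hedged variant sketch (the `get_last_line_checked` name is illustrative):

```python
import subprocess

def get_last_line_checked(filepath):
    # check=True raises CalledProcessError when tail exits with an error,
    # e.g. if the file does not exist.
    result = subprocess.run(
        ['tail', '-n', '1', filepath],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```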
Pros:
- Extremely fast and efficient.
- Ideal for real-time log file monitoring.
Cons:
- Platform-dependent (only works on Unix/Linux systems).
7. Using `read()` and `splitlines()`
```python
with open('file.txt', 'r') as f:
    content = f.read()

lines = content.splitlines()
if lines:
    last_line = lines[-1]
    print(last_line)
```
This approach reads the entire file into a single string, splits it into lines, and retrieves the last line.
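One reason `splitlines()` is used here rather than `split("\n")`: it does not leave a trailing empty string when the file ends with a newline. A quick illustration:

```python
content = "a\nb\nc\n"
print(content.split("\n"))   # ['a', 'b', 'c', ''] -- last element is empty
print(content.splitlines())  # ['a', 'b', 'c']     -- last element is 'c'
```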
Pros:
- Simple and beginner-friendly.
Cons:
- Not memory-efficient for large files.
Conclusion
Reading the last line of a file in Python can be accomplished using different methods, each suited for different file sizes and use cases:
- For small files: Simple methods like `readlines()` and `splitlines()` work well.
- For large files: Efficient methods like `seek()`, `deque`, and `subprocess` (`tail`) are better suited.
- For Unix/Linux systems: The `tail` command is the fastest approach.
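If you want a single entry point, one option is to dispatch on file size. A hedged sketch combining two of the methods above; the `read_last_line` name and the 1 MiB threshold are illustrative assumptions, not benchmarked values:

```python
import os

def read_last_line(filepath, large_threshold=1024 * 1024):
    # Small files: load everything at once; large files: seek backward.
    if os.path.getsize(filepath) < large_threshold:
        with open(filepath, 'r') as f:
            lines = f.readlines()
        return lines[-1].strip() if lines else ""
    return get_last_line(filepath)  # The seek()-based helper from method 3
```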
| Method | Memory Efficiency | Complexity | Best For |
| --- | --- | --- | --- |
| `for` loop | Low | Simple | Small files |
| `readlines()` | Low | Simple | Small to medium files |
| `seek()` + `readlines()` | High | Moderate | Large files |
| `linecache` | Moderate | Moderate | Indexed files |
| `deque` | High | Moderate | Large files |
| `subprocess` (`tail` command) | High | Moderate | Unix/Linux systems |
| `read()` + `splitlines()` | Low | Simple | Small files |
By understanding these approaches, you can choose the best method for your application, ensuring optimal performance and memory efficiency. Whether you’re dealing with log files, sensor data, or any other structured data, Python provides multiple flexible ways to retrieve the last line efficiently.