PYTHON
Remove Duplicates from a List While Preserving Order
Discover Pythonic methods to eliminate duplicate elements from a list without altering the original order of the remaining unique items, a common data cleaning task.
original_list = [1, 3, 2, 3, 1, 4, 5, 2]
# Method 1: set + list comprehension (preserves order)
# `seen.add(x)` returns None, so `not seen.add(x)` is always True;
# its only purpose is to record x in `seen` as a side effect.
seen = set()
unique_list_ordered = [x for x in original_list if x not in seen and not seen.add(x)]
print(f"Unique list (order preserved): {unique_list_ordered}")
# Method 2: Using dict.fromkeys (works in Python 3.7+ and preserves order)
# This works because dictionary keys are inherently unique and insertion order is preserved in 3.7+
unique_list_dict_keys = list(dict.fromkeys(original_list))
print(f"Unique list (dict.fromkeys): {unique_list_dict_keys}")
# Method 3: Simple for loop (more verbose)
unique_items_loop = []
seen_loop = set()
for item in original_list:
    if item not in seen_loop:
        unique_items_loop.append(item)
        seen_loop.add(item)
print(f"Unique list (loop): {unique_items_loop}")
How it works: This snippet shows three ways to remove duplicate elements from a list while keeping the first occurrence of each item in its original position. Method 1 uses a `set` of already-seen elements inside a list comprehension; the `not seen.add(x)` clause works because `set.add` returns `None`, so the expression always evaluates to `True` and exists only to record `x` as a side effect. Method 2, `dict.fromkeys()`, is the most concise option for Python 3.7 and newer, leveraging the fact that dictionary keys are unique and insertion order is preserved. Method 3 spells out the same seen-set logic as an explicit loop. All three run in O(n) time, but they all require the list elements to be hashable.
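If you need this in more than one place, the seen-set logic is easy to package as a lazy generator. The sketch below is a minimal version of this idea; the name `unique_everseen` is borrowed from the itertools recipes in the Python documentation and is an assumption here, not part of the snippet above.

```python
def unique_everseen(iterable):
    # Yield each element the first time it appears, preserving order.
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item

print(list(unique_everseen([1, 3, 2, 3, 1, 4, 5, 2])))  # [1, 3, 2, 4, 5]
```

Because it is a generator, it works on any iterable (not just lists) and stops consuming input as soon as the caller does.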