PYTHON
Remove Duplicate Elements from a List While Preserving Original Order
Learn Pythonic ways to efficiently remove duplicate items from a list, ensuring the relative order of the remaining unique elements is maintained.
my_list = [1, 2, 2, 3, 4, 1, 5, 3, 6]
# Method 1: Using dict.fromkeys (most Pythonic for hashable items)
# Dictionaries preserve insertion order in Python 3.7+ (and practically in 3.6)
unique_ordered_list_dict_fromkeys = list(dict.fromkeys(my_list))
print(f"Original list: {my_list}")
print(f"Unique (dict.fromkeys): {unique_ordered_list_dict_fromkeys}")
# Method 2: Explicit loop with a `seen` set (hashable items; easy to extend with a key function)
unique_elements = []
seen = set()
for item in my_list:
    if item not in seen:
        unique_elements.append(item)
        seen.add(item)
print(f"Unique (manual loop): {unique_elements}")
How it works: Removing duplicates from a list while maintaining the original order is a common task. The most Pythonic approach for lists of hashable items (numbers, strings, tuples) is `dict.fromkeys()`. Since Python 3.7 (and, as an implementation detail, CPython 3.6), dictionaries preserve insertion order, so building a dict keyed by the list's elements and converting it back to a list filters out duplicates while keeping each element's first occurrence. The manual loop with a `seen` set does the same in O(n) time and gives you explicit control, for example to deduplicate by a key function. For lists containing unhashable items (like lists or dictionaries), neither a dict nor a set can hold them, so you must fall back to checking membership in the result list itself, which costs O(n²).
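The unhashable case can be sketched like this (a minimal example with a hypothetical list of sublists; linear membership checks trade speed for generality):

```python
# Sublists are unhashable, so they cannot go into a set or be dict keys.
nested = [[1, 2], [3, 4], [1, 2], [5, 6], [3, 4]]

unique_nested = []
for item in nested:
    if item not in unique_nested:  # list membership is an O(n) scan
        unique_nested.append(item)

print(unique_nested)  # [[1, 2], [3, 4], [5, 6]]
```

If the sublists' contents are themselves hashable, you can regain O(n) behavior by tracking `tuple(item)` in a `seen` set while appending the original item.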