Efficient Methods for Removing Duplicate Elements from ArrayList in Java

Dec 03, 2025 · Programming

Keywords: Java | ArrayList | Deduplication

Abstract: This article provides an in-depth exploration of various methods for removing duplicate elements from ArrayList in Java, focusing on the efficient LinkedHashSet approach that preserves order. It compares performance differences between methods, explains O(n) vs O(n²) time complexity, and presents case-insensitive deduplication solutions to help developers choose the most appropriate implementation based on specific requirements.

Introduction

In Java programming, removing duplicate elements from collections is a common requirement. ArrayList, as the most widely used dynamic array implementation, offers flexible data storage but lacks built-in deduplication functionality. Developers need to implement deduplication logic manually, which involves considerations of algorithmic efficiency, order preservation, and special comparison requirements.

Problem Analysis

The example code in the original question attempts to remove duplicates by comparing adjacent elements:

List<String> list = new ArrayList<String>();
list.add("Krishna");
list.add("Krishna");
list.add("Kishan");
list.add("Krishn");
list.add("Aryan");
list.add("Harm");

for (int i = 1; i < list.size(); i++) {
    String a1 = list.get(i);
    String a2 = list.get(i-1);
    if (a1.equals(a2)) {
        list.remove(a1);
    }
}

This approach has several issues: first, it only compares adjacent elements, so non-adjacent duplicates survive; second, list.remove(a1) deletes the first occurrence of the value and shifts the remaining elements left, so the loop skips the element that moves into the vacated index; most importantly, each remove is itself O(n), so the overall time complexity degrades to O(n²), making it inefficient for large datasets.
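If an in-place removal is still desired, the loop can be corrected by tracking already-seen elements in a HashSet and deleting through an Iterator, which is the only safe way to modify a list while iterating. The following is a minimal sketch of one such variant (class and method names are chosen here for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class InPlaceDedup {
    // Removes all duplicates, adjacent or not, in a single pass.
    static void dedup(List<String> list) {
        Set<String> seen = new HashSet<>();
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            // Set.add() returns false when the element is already present.
            if (!seen.add(it.next())) {
                it.remove(); // safe removal during iteration
            }
        }
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList(
                "Krishna", "Krishna", "Kishan", "Krishn", "Aryan", "Harm"));
        dedup(list);
        System.out.println(list); // [Krishna, Kishan, Krishn, Aryan, Harm]
    }
}
```

This keeps the original insertion order and runs in O(n) expected time, at the cost of the auxiliary HashSet.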

Efficient Solutions

Using LinkedHashSet to Preserve Order

The most recommended solution leverages Set collection characteristics, particularly LinkedHashSet:

list = new ArrayList<String>(new LinkedHashSet<String>(list));

The key advantages of this method are:

  1. O(n) Time Complexity: LinkedHashSet add operations have average O(1) time complexity, requiring only one pass through the list
  2. Order Preservation: LinkedHashSet maintains the original insertion order while removing duplicates
  3. Code Simplicity: Single line of code completes the deduplication

The implementation works because LinkedHashSet filters duplicates as its constructor copies the elements; the ArrayList constructor then converts the set back into a list. This approach is significantly more efficient than methods using List#contains or List#remove in a loop (both O(n²) time complexity).
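Applied to the sample list from the problem statement, the one-liner looks like this (a minimal, self-contained sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class LinkedHashSetDedup {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList(
                "Krishna", "Krishna", "Kishan", "Krishn", "Aryan", "Harm"));
        // LinkedHashSet drops duplicates while keeping insertion order;
        // the ArrayList constructor copies the result back into a list.
        list = new ArrayList<>(new LinkedHashSet<>(list));
        System.out.println(list); // [Krishna, Kishan, Krishn, Aryan, Harm]
    }
}
```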

Performance Comparison

Time complexity comparison of different methods:

  1. Adjacent-comparison loop (original code): O(n²)
  2. List#contains / List#remove in a loop: O(n²)
  3. LinkedHashSet construction: O(n)
  4. TreeSet with a comparator: O(n log n)

For a list with 1,000 elements, an O(n²) method may require nearly a million comparisons, while the O(n) method needs only about a thousand operations.

Advanced Application Scenarios

Case-Insensitive Deduplication

In some cases, case-insensitive string comparison is required for deduplication. This can be achieved using TreeSet with a custom comparator:

Set<String> toRetain = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
toRetain.addAll(list);
Set<String> set = new LinkedHashSet<String>(list);
set.retainAll(new LinkedHashSet<String>(toRetain));
list = new ArrayList<String>(set);

Here toRetain keeps only the first occurrence of each case-insensitive group, and wrapping it in a LinkedHashSet before retainAll ensures the membership check falls back to ordinary case-sensitive equals(). This solution has O(n log n) time complexity, slightly slower than the basic LinkedHashSet method, but it meets the special comparison requirement. If order is not important, it can be simplified to:

Set<String> set = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
set.addAll(list);
list = new ArrayList<String>(set);
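Putting the order-preserving variant together as a runnable sketch (the sample strings and the dedup method name are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class CaseInsensitiveDedup {
    static List<String> dedup(List<String> list) {
        // toRetain keeps the first occurrence of each case-insensitive group.
        Set<String> toRetain = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        toRetain.addAll(list);
        // Re-wrap toRetain in a LinkedHashSet so retainAll compares with
        // ordinary case-sensitive equals(); insertion order is preserved.
        Set<String> set = new LinkedHashSet<>(list);
        set.retainAll(new LinkedHashSet<>(toRetain));
        return new ArrayList<>(set);
    }

    public static void main(String[] args) {
        System.out.println(dedup(Arrays.asList("Java", "JAVA", "java", "Python")));
        // [Java, Python]
    }
}
```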

Deduplication of Custom Objects

For custom class objects, proper implementation of equals() and hashCode() methods is essential:

import java.util.Objects;

class Person {
    private String name;
    private int age;
    
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Person person = (Person) o;
        return age == person.age && Objects.equals(name, person.name);
    }
    
    @Override
    public int hashCode() {
        return Objects.hash(name, age);
    }
}

List<Person> persons = new ArrayList<>();
// Add elements...
persons = new ArrayList<>(new LinkedHashSet<>(persons));
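A complete version of the Person example might look like the sketch below; the constructor, toString(), and sample data are added here for illustration and are not part of the original snippet:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Objects;

public class PersonDedup {
    static class Person {
        private final String name;
        private final int age;

        Person(String name, int age) { this.name = name; this.age = age; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            Person p = (Person) o;
            return age == p.age && Objects.equals(name, p.name);
        }

        @Override
        public int hashCode() { return Objects.hash(name, age); }

        @Override
        public String toString() { return name + "(" + age + ")"; }
    }

    public static void main(String[] args) {
        List<Person> persons = new ArrayList<>(Arrays.asList(
                new Person("Alice", 30),
                new Person("Bob", 25),
                new Person("Alice", 30))); // duplicate by equals()
        persons = new ArrayList<>(new LinkedHashSet<>(persons));
        System.out.println(persons); // [Alice(30), Bob(25)]
    }
}
```

Without the equals()/hashCode() overrides, LinkedHashSet would fall back to identity comparison and keep all three objects.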

Practical Recommendations

1. Choose Method Based on Requirements: Use LinkedHashSet for basic deduplication with order preservation; consider TreeSet for special comparison logic

2. Consider Memory Usage: Set methods create temporary collections; consider memory overhead for extremely large lists

3. Thread Safety Considerations: ArrayList and the Set implementations used here are not thread-safe; in multi-threaded environments, external synchronization (for example, Collections.synchronizedList) is needed

4. Performance Testing: For critical performance paths, conduct actual performance tests to choose the most suitable implementation

Conclusion

Removing duplicate elements from ArrayList is a common task in Java development. By using LinkedHashSet, developers can efficiently perform deduplication with O(n) time complexity while preserving element order. For more complex comparison requirements, such as case-insensitive deduplication, TreeSet with custom comparators can be employed. Understanding the performance characteristics and applicable scenarios of different methods helps in writing more efficient and robust code.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.