Efficient Algorithm for Computing Product of Array Except Self Without Division

Dec 03, 2025 · Programming

Keywords: Array Product Algorithm | Prefix-Suffix Decomposition | O(N) Time Complexity | Space Complexity Optimization | Division-Free Computation

Abstract: This paper provides an in-depth analysis of the algorithm problem that requires computing the product of all elements in an array except the current element, under the constraints of O(N) time complexity and without using division. By examining the clever combination of prefix and suffix products, it explains two implementation schemes with different space complexities and provides complete Java code examples. Starting from problem definition, the article gradually derives the algorithm principles, compares implementation differences, and discusses time and space complexity, offering a systematic solution for similar array computation problems.

Problem Background and Constraints

In algorithm interviews and programming practice, problems involving computations between array elements are frequently encountered. One classic problem is: given an integer array nums, return a new array products where products[i] equals the product of all nums[j] for j != i. For example, for input array [1, 2, 3, 4, 5], the output should be [120, 60, 40, 30, 24].

The challenge of this problem lies in two key constraints: first, the algorithm must complete in O(N) time complexity where N is the array length; second, division operations cannot be used. If division were allowed, the problem would become trivial—simply compute the product of the entire array and then perform division for each element. However, with division prohibited, more ingenious mathematical and algorithmic solutions are required.
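To make the contrast concrete, here is a minimal sketch of the disallowed division approach (class and method names are illustrative, not part of the problem statement). Besides violating the constraint, it fails outright when the array contains a zero:

```java
// Illustrative sketch of the DISALLOWED division approach.
public class DivisionApproach {
    public static int[] divisionBased(int[] nums) {
        int total = 1;
        for (int x : nums) {
            total *= x;
        }
        int[] result = new int[nums.length];
        for (int i = 0; i < nums.length; i++) {
            // Throws ArithmeticException if nums[i] == 0,
            // and gives wrong answers if the array contains any zero.
            result[i] = total / nums[i];
        }
        return result;
    }
}
```

This is why the division ban forces the prefix-suffix technique described below: that technique multiplies only, so zeros pose no special difficulty.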

Core Algorithm Principle

The key to solving this problem is recognizing that each products[i] can be decomposed into the product of two parts: the product of all elements to the left of index i (prefix product) and the product of all elements to the right of index i (suffix product). Specifically, for any index i:

products[i] = (nums[0] * nums[1] * ... * nums[i-1]) * (nums[i+1] * nums[i+2] * ... * nums[N-1])

Based on this observation, two auxiliary arrays can be constructed: leftProducts storing the product of all elements to the left of each position, and rightProducts storing the product of all elements to the right of each position. These arrays can be built through two linear scans, and then multiplying them element-wise yields the final result.

Taking a four-element array [a, b, c, d] as an example, the decomposition of each result element is:

products[0] = 1 * (b * c * d)
products[1] = a * (c * d)
products[2] = (a * b) * d
products[3] = (a * b * c) * 1

Here the prefix product of the first element and the suffix product of the last element are both the empty product, which is defined as 1.

Algorithm Implementation Schemes

Based on the above principle, two main implementation schemes can be designed, differing in their space complexity.

Scheme 1: Using Additional Auxiliary Arrays (O(N) Space)

This scheme is the most intuitive, creating two auxiliary arrays of length N to store prefix and suffix products respectively. Here is the Java implementation code:

public int[] productExceptSelf(int[] nums) {
    int n = nums.length;
    int[] leftProducts = new int[n];
    int[] rightProducts = new int[n];
    int[] result = new int[n];
    
    // Compute left products
    int left = 1;
    for (int i = 0; i < n; i++) {
        leftProducts[i] = left;
        left *= nums[i];
    }
    
    // Compute right products
    int right = 1;
    for (int i = n - 1; i >= 0; i--) {
        rightProducts[i] = right;
        right *= nums[i];
    }
    
    // Combine results
    for (int i = 0; i < n; i++) {
        result[i] = leftProducts[i] * rightProducts[i];
    }
    
    return result;
}

This implementation clearly demonstrates the algorithm logic: the first loop builds the prefix product array from left to right, the second loop builds the suffix product array from right to left, and the third loop combines the results. The time complexity is O(N) with three array traversals; the space complexity is O(N), requiring storage for two auxiliary arrays.
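As a quick sanity check, the following self-contained sketch (the class name is illustrative) runs Scheme 1 on the example array from the introduction and prints the two auxiliary arrays so their contents can be inspected:

```java
import java.util.Arrays;

public class Scheme1Trace {
    public static int[] compute(int[] nums) {
        int n = nums.length;
        int[] leftProducts = new int[n];
        int[] rightProducts = new int[n];
        int[] result = new int[n];

        // Pass 1: leftProducts[i] = product of nums[0..i-1]
        int left = 1;
        for (int i = 0; i < n; i++) {
            leftProducts[i] = left;
            left *= nums[i];
        }
        // Pass 2: rightProducts[i] = product of nums[i+1..n-1]
        int right = 1;
        for (int i = n - 1; i >= 0; i--) {
            rightProducts[i] = right;
            right *= nums[i];
        }
        // Pass 3: combine
        for (int i = 0; i < n; i++) {
            result[i] = leftProducts[i] * rightProducts[i];
        }
        System.out.println("left  = " + Arrays.toString(leftProducts));
        System.out.println("right = " + Arrays.toString(rightProducts));
        return result;
    }

    public static void main(String[] args) {
        int[] result = compute(new int[]{1, 2, 3, 4, 5});
        System.out.println(Arrays.toString(result)); // [120, 60, 40, 30, 24]
    }
}
```

For the input [1, 2, 3, 4, 5], the auxiliary arrays come out as left = [1, 1, 2, 6, 24] and right = [120, 60, 20, 5, 1], and their element-wise product matches the expected output.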

Scheme 2: In-place Modification of Result Array (O(1) Extra Space)

If O(1) space complexity is required (excluding the space for the returned result array), the above algorithm can be optimized to use only the result array itself for intermediate calculations:

public int[] productExceptSelf(int[] nums) {
    int n = nums.length;
    int[] result = new int[n];
    
    // Phase 1: Store left products in result
    int left = 1;
    for (int i = 0; i < n; i++) {
        result[i] = left;
        left *= nums[i];
    }
    
    // Phase 2: Accumulate right products
    int right = 1;
    for (int i = n - 1; i >= 0; i--) {
        result[i] *= right;
        right *= nums[i];
    }
    
    return result;
}

The key to this optimized version is: in the first phase, the result array is used as temporary storage for left products; in the second phase, right products are multiplied during traversal. This avoids creating additional auxiliary arrays, reducing extra space complexity to O(1) while maintaining O(N) time complexity.
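The double duty of the result array can be observed directly. In this small trace (the class name is illustrative), the intermediate state after phase 1 holds only the left products, and phase 2 folds the right products in on top of them:

```java
import java.util.Arrays;

public class Scheme2Trace {
    public static int[] compute(int[] nums) {
        int n = nums.length;
        int[] result = new int[n];

        // Phase 1: result temporarily holds left products only.
        int left = 1;
        for (int i = 0; i < n; i++) {
            result[i] = left;
            left *= nums[i];
        }
        System.out.println("after phase 1: " + Arrays.toString(result));

        // Phase 2: multiply the running right product into each slot.
        int right = 1;
        for (int i = n - 1; i >= 0; i--) {
            result[i] *= right;
            right *= nums[i];
        }
        System.out.println("after phase 2: " + Arrays.toString(result));
        return result;
    }

    public static void main(String[] args) {
        compute(new int[]{1, 2, 3, 4, 5});
        // after phase 1: [1, 1, 2, 6, 24]
        // after phase 2: [120, 60, 40, 30, 24]
    }
}
```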

Algorithm Analysis and Comparison

Both schemes satisfy the problem requirements of O(N) time complexity without using division. Their main difference lies in space usage:

  1. Scheme 1 requires O(N) extra space to store two auxiliary arrays, but the code logic is clearer and easier to understand and debug.
  2. Scheme 2 requires only O(1) extra space, but its logic is more condensed and may need additional comments to explain how the result array serves double duty as temporary storage.

In practical applications, the choice between schemes depends on specific requirements: if code readability is the primary concern, Scheme 1 is more suitable; if memory usage has strict constraints, Scheme 2 is better. Both schemes embody the divide-and-conquer approach—decomposing a complex problem into independently solvable subproblems (left products and right products) and then combining the results.

Edge Cases and Considerations

When implementing the algorithm, the following edge cases should be considered:

  1. Empty or single-element arrays: For empty arrays, an empty array should be returned; for single-element arrays, according to problem definition, an array containing only 1 should be returned.
  2. Arrays containing zeros: The algorithm correctly handles arrays containing zeros since no division operations are involved.
  3. Large number overflow: If the product of array elements might exceed integer range, using big number types or modulo operations should be considered.
  4. Negative number handling: The algorithm has no special requirements for negative numbers and correctly handles sign changes.

Here is a complete example with edge case testing:

// Test different edge cases
// (assumes import java.util.Arrays and access to either
//  productExceptSelf implementation shown above)
int[][] testCases = {
    {},                    // Empty array
    {5},                   // Single element
    {0, 1, 2},             // Contains zero
    {-1, 2, -3, 4},        // Contains negative numbers
    {1, 2, 3, 4, 5}        // Standard case
};

for (int[] nums : testCases) {
    int[] result = productExceptSelf(nums);
    System.out.println(Arrays.toString(result));
}
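Point 3 in the list above (overflow) deserves a concrete sketch. One simple mitigation, assuming the products fit in 64 bits, is to accumulate in long; the class and method names here are illustrative:

```java
public class LongProduct {
    // Scheme 2 with long accumulators: reduces (but does not eliminate)
    // the risk of overflow when intermediate products exceed int range.
    public static long[] productExceptSelfLong(int[] nums) {
        int n = nums.length;
        long[] result = new long[n];

        long left = 1;
        for (int i = 0; i < n; i++) {
            result[i] = left;
            left *= nums[i];
        }
        long right = 1;
        for (int i = n - 1; i >= 0; i--) {
            result[i] *= right;
            right *= nums[i];
        }
        return result;
    }
}
```

For example, the input {100000, 100000, 2} produces {200000, 200000, 10000000000}, where the last value would already overflow a 32-bit int. If even long can overflow, java.math.BigInteger or a problem-specified modulus is the next step.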

Extended Applications and Variants

The core idea of this algorithm can be extended to other similar problems:

  1. Multi-dimensional extension: For two-dimensional matrices, computing the product of all elements except the current one requires handling row and column directions separately.
  2. Cumulative statistics: Similar methods can be used for computing cumulative sums, cumulative maximums, and other statistics.
  3. Sliding window products: Combined with sliding window techniques, products within fixed-size windows excluding the center element can be computed.

The key to understanding this algorithm lies in mastering the "prefix-suffix decomposition" thinking pattern, which is highly useful for solving many array processing problems.
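As an illustration of point 2 above (cumulative statistics), the same two-pass pattern computes "sum of array except self" by replacing multiplication with addition and the empty product 1 with the empty sum 0. This sketch uses a hypothetical sumExceptSelf helper:

```java
public class SumExceptSelf {
    // Same prefix/suffix two-pass pattern as the product algorithm,
    // with * replaced by + and the identity 1 replaced by 0.
    public static int[] sumExceptSelf(int[] nums) {
        int n = nums.length;
        int[] result = new int[n];

        int left = 0; // running sum of elements to the left of i
        for (int i = 0; i < n; i++) {
            result[i] = left;
            left += nums[i];
        }
        int right = 0; // running sum of elements to the right of i
        for (int i = n - 1; i >= 0; i--) {
            result[i] += right;
            right += nums[i];
        }
        return result;
    }
}
```

For {1, 2, 3, 4} this yields {9, 8, 7, 6}. (For sums specifically, subtracting nums[i] from the total is simpler; the two-pass form matters for operations like products, where the inverse operation is unavailable.)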

Conclusion

This paper provides a detailed analysis of the efficient algorithm for computing the product of array elements except self under the constraints of O(N) time complexity and without using division. By decomposing the problem into computations of prefix and suffix products, two implementation schemes with different space complexities are presented. The first scheme uses auxiliary arrays with clear and understandable code; the second scheme modifies the result array in-place with higher space efficiency. Both schemes embody the divide-and-conquer approach and the space-time trade-off principles in algorithm design. This algorithm not only solves a specific interview problem but, more importantly, demonstrates a general methodology for handling array cumulative computation problems.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.