In this paper we exploit the amplitude diversity provided by two sensors to achieve blind separation of two speech sources. We propose a simple and highly computationally efficient method for separating sources that are W-disjoint orthogonal (W-DO), i.e., sources whose time-frequency representations are disjoint sets. The Degenerate Unmixing Estimation Technique (DUET), a powerful and efficient method that exploits the W-disjoint orthogonality property, requires extensive computations for maximum likelihood parameter learning. Our proposed method avoids all the computations required for parameter estimation by assuming that the sources are "cross high-low diverse (CH-LD)", an assumption that is explained later and that can be satisfied by exploiting the sensor placements/directions. With this assumption and the W-disjoint orthogonality property, two binary time-frequency masks that extract the original sources from one of the two mixtures can be constructed directly from the amplitude ratios of the time-frequency points of the two mixtures. The method works very well when tested with both artificial and real mixtures. Its performance is comparable to that of DUET, while requiring only about 2% of the computations of the DUET method. Moreover, it is free of the convergence problems that lead to poor signal-to-interference ratios (SIR) in the first parts of the signals. As with all binary masking approaches, the method suffers from artifacts that appear in the output signals.
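For illustration, the following is a minimal NumPy/SciPy sketch of amplitude-ratio binary masking of the kind described above. The unit threshold, STFT parameters, and function name are assumptions for the example, not the paper's exact settings; the paper's CH-LD-based construction is described in the sections that follow.

```python
import numpy as np
from scipy.signal import stft, istft

def amplitude_ratio_masks(x1, x2, fs=16000, nperseg=1024):
    """Sketch: separate two W-disjoint orthogonal sources from two mixtures
    x1, x2 by thresholding the per-bin amplitude ratio |X1|/|X2|.
    The threshold of 1.0 assumes each source dominates a different sensor
    (a CH-LD-like condition); real setups may require a different split."""
    _, _, X1 = stft(x1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)

    # Amplitude ratio at every time-frequency point (eps avoids division by zero).
    ratio = np.abs(X1) / (np.abs(X2) + 1e-12)

    # Binary masks: each time-frequency point is assigned to exactly one source.
    mask_a = ratio >= 1.0          # points where sensor 1 dominates
    mask_b = ~mask_a               # remaining points go to the other source

    # Recover both sources from the first mixture only.
    _, s_a = istft(mask_a * X1, fs=fs, nperseg=nperseg)
    _, s_b = istft(mask_b * X1, fs=fs, nperseg=nperseg)
    return s_a, s_b
```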