From: Jared Casper
Subject: [Discuss-gnuradio] Tri-mode Ethernet MAC on USRP2 (was Re: interfacing a DSP array card to USRP2)
Date: Sat, 15 May 2010 02:01:07 -0700

On Fri, Apr 9, 2010 at 7:09 AM, Matt Ettus <address@hidden> wrote:
>> My understanding is that it takes 3 BUFGs and one DCM for tri-mode (maybe
>> one more of each for RGMII support but I
>> don't see that) and, between this and other USRP2 needs, you ran into the
>> limit of 8.  Is that accurate?  Or would
>> 10/100/1000 support take more than 3...
>
> I can't say how many clocks a _good_ 10/100/1G system would need, but the
> Opencore required 4.  One thing to keep in mind is that while there are
> theoretically 8 global clocks in the S3, other limitations mean that it can
> be difficult to use all 8.
>

I spent some time looking at this today and thought I would share my
findings here, for posterity if nothing else...

If you are careful, you can do a 10/100/1000 MAC with two clocks.  One
is for TX: either TX_CLK for 10/100, which comes from the phy, or
GTX_CLK for 1000, which is supplied by the MAC (on the USRP2, the phy
conveniently gives us a 125 MHz clock we can use).  The other is for
RX and is always RX_CLK, supplied by the phy.  Using four clocks is
useful because in 10/100 mode only four bits of the eight-bit data bus
are used: you can run all of the logic that works on eight bits from
the appropriate clock divided by two and do the combining/splitting in
the reconciliation layer, so most of your logic is agnostic about what
mode you are in.  Alternatively, you can always use the two clocks
directly and do careful pipelining that can stall in 10/100 mode.
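As a rough software model of that reconciliation layer (Python, not
HDL; the function names are made up for illustration), the nibble
splitting/combining in 10/100 mode looks like:

```python
def split_bytes(octets):
    """TX direction in 10/100 (MII) mode: one byte per half-rate cycle
    in, two 4-bit nibbles per TX_CLK cycle out (low nibble first, as
    on MII)."""
    out = []
    for b in octets:
        out.append(b & 0x0F)          # low nibble goes on the wire first
        out.append((b >> 4) & 0x0F)   # then the high nibble
    return out

def combine_nibbles(nibbles):
    """RX direction: pair up the 4-bit nibbles from the phy so the rest
    of the MAC always sees an 8-bit bus at half the nibble rate."""
    assert len(nibbles) % 2 == 0
    return [(hi << 4) | lo for lo, hi in zip(nibbles[0::2], nibbles[1::2])]
```

Round-tripping a byte stream through both gives the original back,
e.g. combine_nibbles(split_bytes([0xAB, 0xCD])) == [0xAB, 0xCD]; in
1000 mode the reconciliation layer would just pass bytes through
untouched.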

That said, for the USRP2, those "other limitations" get in the way.
There are eight clock buffers on the USRP2's FPGA, four on the
"bottom" and four on the "top".  Each buffer can multiplex between two
input clocks.  The bottom four end up like this:
BUFGMUX0: clk_to_mac (125 MHz)
BUFGMUX1: GMII_RX_CLK
BUFGMUX2: cpld_clk
BUFGMUX3: ser_rx_clk

So to do tri-mode, you'd like to just add GMII_TX_CLK to the other
input of BUFGMUX0 and be done with it (even though it isn't connected
to a clock input pin, it isn't far from BUFGMUX0, so it shouldn't be
too much of a problem to use general routing resources to get it to
the BUFGMUX, like what is currently done with cpld_clk).
Unfortunately, BUFGMUX0 and BUFGMUX1 share inputs: if two clocks go
into BUFGMUX0, the same two clocks must go into BUFGMUX1.  This means
BUFGMUX0 and 1 would both be taken up muxing between GMII_TX_CLK and
the 125 MHz clock, something that is unavoidable since 10/100 mode
simply uses a different TX clock than 1000 mode.  So Jeff was right
above: in this case it would take three BUFGs to do tri-mode, and thus
you are out of room for clocks on the bottom of the chip.
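To make the counting argument concrete, here is a toy Python
feasibility check for the bottom-edge buffers (purely illustrative --
it only encodes the pairing rule described above, and real placement
has more constraints than this):

```python
from itertools import product

def fits_on_bottom(clock_sets, n_pairs=2):
    """Model of the bottom edge of the Spartan-3: n_pairs pairs of
    clock buffers, where the two buffers in a pair share the same two
    clock inputs.  Each entry in clock_sets is a frozenset of clock
    names one buffer output must be able to drive (a runtime mux is a
    2-element set, a plain clock a 1-element set).  Brute-force every
    assignment of outputs to pairs and report whether some assignment
    respects both the 2-outputs and 2-shared-inputs limits per pair."""
    for assign in product(range(n_pairs), repeat=len(clock_sets)):
        ok = True
        for p in range(n_pairs):
            members = [s for s, a in zip(clock_sets, assign) if a == p]
            inputs = set().union(*members) if members else set()
            if len(members) > 2 or len(inputs) > 2:
                ok = False
                break
        if ok:
            return True
    return False

# Current bitstream: four single clocks -- fits (two per pair).
current = [frozenset([c]) for c in
           ["clk_to_mac", "GMII_RX_CLK", "cpld_clk", "ser_rx_clk"]]
assert fits_on_bottom(current)

# Tri-mode: the TX clock must be muxed at runtime, which eats both
# shared inputs of one pair, leaving only two input slots for the
# three remaining clocks -- does not fit.
trimode = [frozenset(["clk_to_mac", "GMII_TX_CLK"])] + current[1:]
assert not fits_on_bottom(trimode)

# Dropping ser_rx_clk (or moving a clock to the top edge) makes it fit.
assert fits_on_bottom(trimode[:-1])
```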

So to do tri-mode, you would have to route one of the clocks (probably
cpld_clk, since it is only 25 MHz) from its pin on the bottom to one
of the two open BUFGMUXes on the top of the chip.  That path would go
through a lot of general routing on the FPGA and introduce significant
skew between the clock at the pin and the clock actually hitting the
flops, which may or may not cause problems.  Alternatively, you could
just ditch ser_rx_clk and remove that functionality if you didn't need
it.  Also, GMII_RX_CLK could no longer use the dedicated routing path
to its BUFGMUX, since it only has a dedicated path to BUFGMUX0 and 1;
but it is pretty close to BUFGMUX2 and 3, so that shouldn't be as
problematic as routing a clock to the other side of the chip.

tl;dr From what I can gather, it is definitely possible to do a
tri-mode MAC on the USRP2, but you would either have to introduce
significant skew on cpld_clk (which is slow enough that it may not
matter), or get rid of ser_rx_clk and its associated functionality.

Jared Casper


