Articles posted by Victoria O'Kane

In Choosing Styles, Write What We Don’t Know


It’s almost time for that once-a-year, month-long celebration for writers: NaNoWriMo. Can you commit yourself to the thrills and tension of writing a book of more than 50,000 words in a single month?

If you’re participating in NaNoWriMo this year, you might feel a bit anxious about whether you’re actually going to finish. Honestly, you should be.

Here’s the hard truth: nearly 90 percent of people who start NaNoWriMo don’t finish.

How do you make sure you don’t fail? How do you win NaNoWriMo? In this post I’m sharing the three things you need if you want to finish NaNoWriMo. (Hint: most of them can be done before NaNoWriMo officially starts on November 1.)

What Is NaNoWriMo?

NaNoWriMo (pronounced nano-rye-moe) is a not-quite-abbreviation for National Novel Writing Month, a tradition dating back to 1999 in which people write a book of 50,000 words or more during the month of November.

Why Join In on the NaNoWriMo Fun

At The Write Practice, we’ve always supported writers participating in NaNoWriMo. We’re big fans for one reason: we believe it’s a great opportunity to practice your writing.

Sure, you may not write a publishable novel in just a month, although several bestselling books have come out of NaNoWriMo, including Water for Elephants, Wool, The Night Circus, and Cinder. Still, all the writing practice you’ll get from focusing on your writing so intensely for an entire month will certainly make you a better writer.

That’s why this year, we want to invite you to write a book in a month with us. I don’t care whether you write fiction or non-fiction. You’re still invited.

Is November 2015 the month you finally finish the book you’ve been meaning to write for years?

And if you decide to join us, we will go all out to help you be part of the eleven percent of people who finish their books during NaNoWriMo.

So what do you think? Will you write a book in a month with us?

I’m going to be part of the 11 percent who finish their books during NaNoWriMo. Join me!

3 Reasons to Participate in NaNoWriMo

Why, every year, do people distill the torment of writing a book into one of the busiest holiday seasons of the year? And more importantly, why should you join in the fun/torment?

There are three reasons to participate in NaNoWriMo:

1. Become an author in a month. The lure of NaNoWriMo is the idea that in just thirty days you can go from being a non-author to accomplishing something most people only dream of: actually writing a book.

2. Focus intensely on writing. Writing a book in a month is actually great, because the best way to finish a project is to give it your full attention, and NaNoWriMo lets you focus completely on one thing for a short, intense period.

3. Escape (some of) the pain of writing a book. Finally, because it’s a community event with hundreds of thousands of people participating at the same time, the motivation and support of other writers removes some of the pain associated with writing a book.

However, despite all these great reasons, only about 11 percent of participants finish their NaNoWriMo novels. While those who start NaNoWriMo may have the best intentions, the reality of writing a book in a month remains really, really hard.

So how do you NOT fail at NaNoWriMo? How do you finally finish your book?

Download and print the full-size NaNoWriMo calendar here

How To NOT Fail at NaNoWriMo

If you want to win NaNoWriMo, you need to do three things:

1. Get a plan.

In preparing for battle I have always found that plans are useless, but planning is indispensable. —Dwight D. Eisenhower

Over the last year, I’ve been coaching a small group of writers to write a book. Of the twelve participants, one hundred percent of them have finished their book. That’s right, all of them.

When I tell my friends in publishing this, they’re all amazed. They know how difficult it is to write a book, and the thought of a writing program with a one hundred percent success rate is almost unbelievable.

The most important part of our process is the first two months, when we create a book proposal. A book proposal is essentially a holistic attack plan for a book, with sections devoted to the writing process and the book’s content, but also the marketing of the book.

How then do you create a book plan?

Essential ingredients in a book plan for NaNoWriMo. What does your book plan contain? Whether you’re writing fiction or non-fiction, there are three essential ingredients (with a few bonus elements if you’re feeling ambitious):

  • Premise. The premise is the main idea of the book, and in many ways it acts as the foundation for your book. Since it’s the foundation, it’s essential to get it right. I’ve spent three weeks working daily on the premise for a book before. In fiction, and especially screenwriting, the premise is also called a logline, a one-sentence summary of the protagonist, main conflict, and setting. In non-fiction, the premise is the central argument you’re making in the book. Want to know more about how to write a great premise? Check out our guide to writing a great premise.
  • USP. The USP, or unique selling proposition, is something of an old marketing phrase, but its purpose is to explain what will make your book special and worth reading. While thinking about the market feels distasteful to some writers, it’s important to consider what will make your book unique.
  • Outline. An annotated outline with your main plot points or ideas is your first chance to think about what your book will be about. For novelists, here’s a cheatsheet for the main plot points.

Bonus elements for your book plan. You can get by without these next ones, but they’re so valuable I would personally recommend them.

  • Overview. The overview breaks your book into sections, describing each in a brief paragraph. For a novel, you will likely break your book into three acts: Act I, setup; Act II, confrontation; and Act III, resolution. In a non-fiction book, the overview will likely break your book into three or more sections describing the problem, the solution, and the application of that solution.
  • Demographics. Who is your audience? Knowing who will be interested in your book is obviously important for marketing, but I find that having a clear picture of my audience is also a great motivator. When I imagine how my writing will change people, it helps me stay focused on getting my book to them as soon as I can.
  • Competition. The competition section usually features three or more other books, ideally ones that have done well in the market, that are similar to yours. It explains both what is similar about your book and what is different. While you may think, “My book is completely unique,” that is actually a terrible attitude to have. If your book is entirely different from every other book, then people probably aren’t going to understand why they should read it.

You can easily have this book plan written by October 1, and even though your plan might not survive far beyond day one of NaNoWriMo, the hours you spend planning your book will be some of the most worthwhile hours you invest.

To learn more about how to create a writing plan for NaNoWriMo, join the free series How to Write a Book in a Month. It starts on October 13!

A NOTE FOR THOSE WHO DON’T LIKE OUTLINING: If you don’t like the idea of plotting your story, remember that you don’t create a book plan so you can slavishly follow your outline and remove all the room for serendipity. A book plan isn’t a leash. It’s a reference point.

You create a plan to be your map for when you get lost (which is inevitable when you’re writing a book).

“No plan survives contact with the enemy,” said one Prussian general.

On November 1, your book plan will go out the window. And that’s perfectly fine. Still, there’s no better way to spend October than working on your book plan.

2. Get a team.

Great writers have often written in community. Ernest Hemingway had the Paris of the 1920s. Jack Kerouac, Allen Ginsberg, and the other Beats had New York in the post-war 1950s. C.S. Lewis and J.R.R. Tolkien had the Inklings.

If you’re going to do something as hard as write a book in a month, you’re going to need a team.

It goes without saying that you need the buy-in of the people closest to you: your spouse (especially if you have children), your family, and your close friends. But you also need other writers who have gone, or are going, through the same thing.

The other writers on “your team” give you four things:

Inspiration. I have many friends who are writers, and when they tell me they just finished another book, it inspires me to work harder on my own projects. When you hear that one of your fellow NaNoWriMo participants just had a 5,000-word day, it will inspire you to write more in your own book.

Encouragement. There’s no getting around it: writing a book is hard. When you hit a snag in your plot or have no idea what to write next, you’ll need your team to encourage you and say, “You can do it. You’ll figure it out. Just keep writing.”

Guidance. When you get to a problem you can’t figure out, you could ask your best friend or your spouse for help, but a fellow writer will often have better ideas and advice than someone who has never tried to write a book in their life.

Accountability. At some point during November, you’ll want to quit. Having a group of people to hold you accountable for finishing what you started may be the difference between success and complete failure.

Not only will having a team help you finish NaNoWriMo, it’s just fun! Who knows? The relationships you build with the other writers you meet during NaNoWriMo could last the rest of your life.

3. Get into rhythm.

Writing is all about rhythm, and this is especially true during NaNoWriMo, where writing quickly is essential to success.

How do you find your rhythm and write quickly during NaNoWriMo?

The main thing that will slow your pace and break your rhythm in November is perfectionism.

Perfectionism is the lie that whatever you write has to be perfect, grammatically correct, unprecedented, original, and free of typos RIGHT NOW.

Perfectionism sounds like this: Oh no! I just misspelled that background character’s name. I have to go back and fix it NOW! If I die in the middle of writing this and someone reads it, I’ll be SO embarrassed!

It’s great to want to become a better writer, but if you want to finish NaNoWriMo, perfection will have to wait until December.

Still feeling perfectionistic? Here’s why an editor says you shouldn’t edit during NaNoWriMo.

Perfectionism is the great destroyer of writing rhythm.

So if perfectionism is the great destroyer of rhythm, how do you get into a good writing rhythm? Here are a few tips:

  • Never miss two days in a row. It’s inevitable that you will miss your word count some day in November. After all, something important will come up, or you’ll get writer’s block and won’t be able to write anything. While it’s fine to miss one day, never miss two days in a row. It’s just too hard to recover, and you’ll probably end up quitting.
  • Focus on the sensations of your writing. Meditate on the feeling of your fingers as they strike the keys, the feel of your hands moving quickly across the keyboard. Be mindful of your breathing. Focus on the sensations of your writing as it happens.
  • Change your font color to light grey, making your writing difficult to see. If you can’t see your writing, you won’t have the urge to break your rhythm to edit.
  • Change your font size to 4 pt. If you make the font too small to read, you can’t self-edit.
  • Turn off your computer monitor. For those with excellent typing skills.
  • Set the brightness on your computer screen so low you can’t see the words. Same effect as above.
  • Remove your delete key. Here’s a video on how to remove a key from your keyboard.

Win NaNoWriMo With The Write Practice

Participating in NaNoWriMo might be one of the most inspiring things you’ve ever done for your writing.

This November, we’re going to do a lot of cool new things to help you finish writing a book in a month. If you’ve ever wanted to write a book or participate in NaNoWriMo, you won’t want to miss this.

And if you want extra accountability, sign up for our free series, How to Write a Book in a Month. You’ll learn everything you need to know about how to outline and write a book in a month.

Sign me up for the series, How to Write a Book in a Month, here

We hope you’ll join us for this series! It will only be sent to people who sign up for it, so be sure to register today.

Have you considered participating in NaNoWriMo? What is your biggest fear about writing a book in a month? Let me know in the comments section.

Practice

Today, get started on the premise of your NaNoWriMo book (you can find more guidance on writing a premise here). In one to three sentences, describe the story or main argument of your book.

Once you’ve written the first draft of your premise, post it in the comments section for feedback. And if you post, be sure to give feedback on the premises of your fellow writers.

Have fun!


How to BS Your Way Through a College Paper


Most every community college and university offers online college courses to meet the needs of its student population. Some even offer entire degree programs that can be earned online. Taking these courses appeals to people with full-time jobs or family obligations, and to anyone who wants the convenience of completing courses without having to attend regular classroom sessions.


List of Compare-and-Contrast Essay Topics


Where to Find Cheap Caribbean Holiday Packages: Caribbean vacations are trips to the countries that border the Caribbean Sea or the islands within it. The Caribbean islands are a chain of 7,000 small and large islands that stretch approximately 2,000 miles (3,200 km) between the Gulf Coast of Florida in North America and the coast of Venezuela in South America.


Living on the edge


The previous posts in this series sketched out how the route from 10 Gbps to 100 Gbps and beyond approaches the theoretical capacity limit of a DWDM channel. Any system operated at the edge of the envelope tends to fail spectacularly, and high channel capacity optics are no exception. Lower bit rate transceivers had a narrow range of degraded operation where bit error rate (BER) would increase as the received signal level approached the lower limit. As we push the channel capacity to the limit, operating margins are reduced, and the margin for error all but disappears.

From Information Theory 101, we know that increasing throughput by a factor of 10x from 10 Gbps to 100 Gbps would require a 10x improvement in OSNR, with all other things being equal. Transmitting 100 Gbps with more sophisticated PM-DPSK modulation, rather than simple OOK, provided a 4x reduction in symbol rate by coding two bits per symbol in both polarization modes. That left a 2.5x gap that needed to be filled for full backwards compatibility of 100 Gbps waves on existing systems designed for 10 Gbps per DWDM channel.
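The 10x / 4x / 2.5x factors above are plain ratio arithmetic, but it is worth seeing them in dB, the unit link budgets are actually engineered in. A minimal Python sketch (the numbers come straight from the paragraph above; nothing vendor-specific is assumed):

```python
import math

def ratio_to_db(x):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(x)

throughput_gain = 10          # 10 Gbps -> 100 Gbps
symbol_rate_gain = 2 * 2      # PM-DPSK: 2 bits/symbol x 2 polarization modes
osnr_gap = throughput_gain / symbol_rate_gain   # the 2.5x left to close

print(ratio_to_db(throughput_gain))   # 10 dB more OSNR needed for 10x the bits
print(ratio_to_db(symbol_rate_gain))  # ~6 dB recovered by the 4x lower symbol rate
print(ratio_to_db(osnr_gap))          # ~4 dB gap (2.5x) left for FEC to close
```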

If this OSNR gap could not be filled, then deployment of 100 Gbps waves would require costly and disruptive re-engineering of installed networks, limiting its utility. Once again, technology originally developed and deployed for wireless communications provided a solution. The secret weapon used to close this gap was improved forward error correction (FEC). But FEC is a double-edged sword.

By adding redundant bits to the bit stream, FEC allows bit errors from forward transmission to be corrected at the receiver, without requesting retransmission on a backward channel. This is analogous to redundant RAID arrays in disk drive storage. By including an additional disk drive, and adding redundant data to each disk, a RAID disk array can tolerate complete failure of any one disk without data loss. Likewise, by breaking a bit stream into blocks and adding redundant bits, FEC can correct a limited number of random bit errors, recovering the corrupted receive data as originally transmitted, without loss.
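To make the redundancy principle concrete, here is a toy Hamming(7,4) block code in Python: three parity bits protect four data bits, and the receiver can locate and correct any single flipped bit without retransmission. This is only an illustration of the idea, not the FEC used in optical transport, which is far stronger (e.g. Reed–Solomon in ITU-T G.709, and newer soft-decision codes).

```python
# Toy Hamming(7,4): the same principle as optical FEC -- redundant bits
# let the receiver correct forward-path errors on its own.

def encode(d):
    """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit in a 7-bit codeword, return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean; otherwise the error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                  # simulate one channel bit error
assert decode(cw) == data   # receiver recovers the data, no retransmission
```

Like the RAID analogy, the redundancy only buys a fixed amount of protection: two flipped bits in one 7-bit block would defeat this code, which is exactly the failure mode discussed next.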

But like everything else, FEC has limits. For a given amount of redundant bits added, a corresponding amount of bit errors can be corrected. Once the input bit error rate reaches a particular FEC algorithm’s limit, the error correction process breaks down, and bit errors appear in the output data. The FEC algorithm fails completely if the bit error rate increases further, and the output data becomes unusable. This catastrophic failure mode exacerbates the so-called “cliff effect” of rapid degradation in digital transmission on noisy links.

Without FEC, the bit error rate would increase more gradually as the OSNR decreased. With FEC, the BER remains near zero as the OSNR degrades, because the algorithm cleans up low-level bit errors. When the received BER stretches the ability of the FEC algorithm to compensate, smaller decreases in OSNR will produce bigger increases in output BER with FEC, than without. So, FEC delays the onset of degraded performance, but it can only do this by reducing the margin for error.
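The cliff is easy to see with a simple binomial model. Assume a hypothetical block code over n-bit blocks that corrects up to t errors per block, so a block is lost whenever more than t errors land in it. The parameters below (n=255, t=8) are illustrative only, not those of any standard optical FEC:

```python
import math

def uncorrectable_prob(n, t, p):
    """Probability an n-bit block sees more than t errors (code fails),
    given raw input bit error rate p and independent errors."""
    return 1.0 - sum(math.comb(n, k) * p**k * (1.0 - p)**(n - k)
                     for k in range(t + 1))

# Sweep the raw BER: the output stays near zero, then falls off a cliff.
for raw_ber in (1e-3, 1e-2, 3e-2, 5e-2):
    print(raw_ber, uncorrectable_prob(255, 8, raw_ber))
```

With these toy numbers, a raw BER of 1e-3 leaves the failure probability negligible, while 5e-2 makes most blocks uncorrectable: the whole transition spans less than two orders of magnitude of input BER, which is the "cliff effect" in miniature.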

Getting throughput closer to the theoretical OSNR limit requires more efficient FEC algorithms. With these more efficient algorithms, bit errors are corrected to an even lower level of OSNR. FEC does not move the theoretical OSNR limit, however; it just allows error free operation closer to that edge. Once OSNR approaches the limit, the more efficient FEC algorithm still breaks down, but the slippery slope is even steeper.

The key takeaway here is that empirical “plug-and-pray” deployments of optical gear become even more untenable as data rates increase, leading to brick wall failure modes that provide little or no warning of impending failure. Many operators have foolishly relied on degradation of output BER to serve as a warning system. Increasing dependence on FEC to improve throughput makes this pure folly.

Without proper design up front, rigorous validation of the as-built system against the design parameters, and constant vigilance over the system lifetime, reliable operation will just be an elusive goal. The margin for degraded operation, where intervention can preempt catastrophic failure, becomes vanishingly small as the channel capacity is stretched. Poor practices that have worked in the past will no longer produce the desired results.

The rapid increase in BER near the OSNR limit with FEC does not matter in the case of a fiber cut, but this sudden failure mode is relatively rare. It is much more common to see a gradual degradation of the fiber link over a time span of several days or months. This can be caused by an accumulation of many small macro-bending losses over time, or a single mechanical instability that slowly gets worse (e.g. a loose connector, cracked fiber, or kinked cable). With proper network performance monitoring, the erosion in optical margin or quality factor (Q-factor) can be detected and addressed at the network operations level in the normal course of business.

Without pro-active maintenance, problems propagate up the layers in the network stack. Adverse influences accumulating in the network at layer-0 eventually produce bit errors at layer-1. In an IP network, this causes CRC errors at layer-2 that require packet retransmission under TCP at layer-4. This leads to sluggish application performance at layer-7, which generates angry phone calls at layer-8. At this point, the problem is no longer a purely technical issue, because too many people outside the networking organization are adversely affected.

With FEC, this cascading failure chain snaps more quickly. The next post in this series will address how to make FEC an asset, rather than a liability, and expand on improving network reliability as more complex transmission schemes are necessarily employed to increase fiber capacity.

Doug Haluza,

CTO, Metro|NS

Ed note, this is the fourth post in a series. The previous post is here. The first post is here.


Beyond 100 Gig


The previous posts in this series outlined how coherent optics stretch the capacity of existing 10 Gbps DWDM systems to 100 Gbps per channel without major surgery on the fiber network. But that is probably as much as commonly deployed 50 GHz channel DWDM systems will carry, at least over any meaningful distance on existing fiber. So how can exponential bandwidth growth continue at reasonable cost?

The new standard 100 Gbps PM-DPSK technology exploits phase and polarization dimensions to quadruple the number of bits transmitted per symbol interval compared to standard 10 Gbps OOK encoding. Coherent receivers using sophisticated DSP algorithms provide the additional performance improvements needed for a 10x increase of throughput on a DWDM channel engineered to carry 10 Gbps. But that brings us close enough to the theoretical channel capacity of existing systems to make further dramatic improvements untenable.
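"Close to the theoretical channel capacity" can be put in numbers with the Shannon–Hartley theorem, C = B·log2(1 + SNR). The sketch below is an idealized single-channel bound (it ignores polarization multiplexing and real implementation penalties), but it shows why each further step gets so much harder:

```python
import math

def min_snr_db(bit_rate_bps, bandwidth_hz):
    """Ideal minimum SNR (dB) to carry bit_rate_bps in bandwidth_hz,
    from the Shannon-Hartley theorem C = B * log2(1 + SNR)."""
    snr = 2 ** (bit_rate_bps / bandwidth_hz) - 1
    return 10 * math.log10(snr)

# 100 Gbps in a 50 GHz channel: 2 (bit/s)/Hz spectral efficiency
print(min_snr_db(100e9, 50e9))   # ~4.8 dB ideal floor
# 400 Gbps in the same 50 GHz channel: 8 (bit/s)/Hz
print(min_snr_db(400e9, 50e9))   # ~24 dB -- roughly a 19 dB jump
```

The required SNR grows exponentially with spectral efficiency, which is why 400 Gbps in a 50 GHz channel engineered for 10 Gbps is not a realistic target.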

Stepping back in time, recall that DWDM was originally a disruptive technology that dramatically increased the capacity of each fiber (or more specifically, the optical amplifiers needed to offset fiber attenuation). Channel spacing of 200 GHz initially provided enough wiggle room for drift in the early lasers. As laser stability improved, the window size was reduced to 100 and then to 50 GHz, which is now the most common format. A further reduction to 25 GHz was never really fully realized, at least in part because it became obvious that channel capacity and not laser stability would become the limiting factor.

To increase DWDM capacity beyond 100 Gbps per 50 GHz channel, what are the options?

    • 400 Gbps waves may never be widely deployable in 50 GHz due to OSNR.
    • 200 Gbps in 50 GHz may be possible with a lot of work, but cost/benefit is iffy.
    • 400 Gbps in 100 GHz is a better bet, but only for older installed systems.
    • Deploying 100 GHz now would just take us back in time, leading to a dead-end.

One problem with increasing throughput within the DWDM channel grid is the unusable dead-bands between channels in the optical filters, which can waste about 30% of the available bandwidth. Four adjacent 50 GHz channels, spanning 200 GHz, could support one terabit per second (Tbps). This would be applicable to installed systems because they typically incorporate band splits or channel groups of four 50 GHz channels. So, 1 Tbps in less than 200 GHz bandwidth is a logical next step that would provide a further 2.5x improvement in overall DWDM spectral efficiency. But that really is the end of the line for existing DWDM systems.
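The 2.5x spectral-efficiency claim checks out with one line of arithmetic:

```python
current_eff = 100e9 / 50e9     # 100 Gbps per 50 GHz channel -> 2.0 (bit/s)/Hz
super_eff = 1e12 / 200e9       # 1 Tbps in a 200 GHz block   -> 5.0 (bit/s)/Hz
print(super_eff / current_eff) # -> 2.5
```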

The DWDM channel grid was established to standardize components that had to be factory tuned to specific wavelengths. The standard grid allowed these components to be mass-produced, reducing costs. This paradigm has enabled tremendous expansion in optical networking for over a decade. But in the future we will move to grid-less multi-terabit transmission.

Tunable lasers in the transmitters have since alleviated problems associated with producing, distributing, and sparing a multitude of fixed wavelength laser modules. Now coherent detection can use these tunable lasers to create a tunable receiver. So there is no need to maintain the fixed DWDM grid. Once we drop the DWDM framework, we can move to a more flexible network architecture. We will soon be able to eliminate fixed channel optical filters, and move to dynamic optical multiplexing.

DWDM, which gave us two orders of magnitude improvement in fiber capacity in the past, will become a hindrance in expanding system capacity in the future. Instead of being an enabling technology moving us forward, conventional DWDM will become a legacy technology. It will continue to be a workhorse enabling bandwidth expansion in the near term, but its long term prospects are limited. Instead of being deployed on the network side of transmission systems operating above 1 Tbps, fixed grid DWDM will only be seen on the client facing sub-rate interfaces.

Doug Haluza,

CTO, Metro|NS

Ed note, this is the third post in a series. The previous post is here. The first post is here.


100G DWDM


Last post, I reviewed how coherent optics allowed 40 Gbps waves to be dropped into existing 10 Gbps DWDM systems without major modifications. That was good news for network operators who had a much more difficult upgrade path from 2.5 to 10 Gbps. There’s more good news: the optical magicians have pulled another rabbit out of their hat. The new generation of 100 Gbps transponders will also play with 10 and 40 Gbps waves in existing 50 GHz DWDM windows. The bad news is that it looks like there are no more rabbits in the hat.

At 100 Gbps, the optical to electrical conversion is problematic, because processing a 100 Gbps native stream would require very specialized electronics today. One way to mitigate these problems is to divide the 100 Gbps stream using wavelength division multiplexing into 10 x 10 Gbps or 4 x 25 Gbps optical channels. These lower speed streams can be transmitted by separate lasers and processed using less specialized optoelectronics. This works well for short-range links where fiber capacity is relatively inexpensive. For longer reach where system capacity is valuable, and suitable lasers are expensive, a native 100 Gbps optical channel using a single laser is desirable.

But increasing modulation rate using the same method is not a viable option for upgrading existing networks, making more sophisticated modulation schemes necessary. Encoding two bits per symbol doubles the data rate without increasing the optical bandwidth, or sensitivity to dispersion. Encoding two of these signals, one in each polarization mode of the fiber, allows a further doubling of the data rate, still with the same bandwidth and dispersion tolerance. This scheme, known as dual-polarization quadrature phase-shift keying (DP-QPSK), is now the standard for commercial development of long-haul 100 Gbps on a single wavelength.

Encoding four bits per symbol interval not only enables transmission using a single channel, it also facilitates signal processing without expensive ultra high-speed electronics. The four bits can be processed as four parallel and uncorrelated 25 Gbps payloads on the line side, and then multiplexed into a single 100 Gbps serial handoff on the drop side.
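The symbol-rate arithmetic behind the four 25 Gbps tributaries is straightforward; a quick sketch (ignoring OTN framing and FEC overhead, which push the actual baud rate somewhat higher):

```python
line_rate = 100e9          # target payload rate, bits per second
bits_per_symbol = 2 * 2    # QPSK: 2 bits/symbol, times 2 polarization modes
symbol_rate = line_rate / bits_per_symbol
print(symbol_rate / 1e9)   # -> 25.0 Gbaud, matching the four 25 Gbps payloads
```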

Decoding a polarization multiplexed signal presents a problem, though. Ordinary single mode fiber does not maintain polarization state along its length. So, complex and expensive dynamic polarization controllers were needed in the past to align the receiver with the transmitted polarization state in the optical domain. A coherent detector moves the polarization state into the electrical domain, allowing it to be estimated by the DSP algorithm. The problem of receiving the two scrambled polarization modes is analogous to transmitting data in free space using two antennas and two receivers, known in wireless communications as multiple-input, multiple-output (MIMO). Algorithms developed for MIMO have been adapted to decode the scrambled polarization state in a coherent receiver, making polarization multiplexing feasible.

With these advancements, 100 Gbps DP-QPSK waves can be added to an existing DWDM system engineered for 10 Gbps. In fact, 100 Gbps transponders using all digital dispersion compensation could be used on links that would require dispersion compensation just to pass 10 Gbps. This can bring new life to older fiber routes that are capacity limited and not easily upgraded, or add value to old fiber obtained on long-term IRU.

Of course, there has to be a down side, and naturally it’s cost. Dual polarization adds optical elements and doubles the number of transmitter and receiver elements. The coherent detector doubles the number of receiver elements again. Each of the four receiver elements must employ high speed ADC and sophisticated real-time DSP. So the cost of 100 Gbps DP-QPSK transponders will probably not be too much less than ten times the cost of 10 Gbps, when they become available. Right now the standard is just a multi-source agreement to develop common components that each optical equipment vendor can use in their proprietary implementation. These components are just entering production now.

That does not mean that you can’t deploy 100 Gbps over a single DWDM wave now. Ciena has 100 Gbps line cards for the OME 6500 platform that have been deployed for more than a year. The former Nortel engineers who developed these had to use an additional trick to split the payload into 12.5 Gbps slices so readily available integrated circuit technology could be used to decode the data. In addition to splitting the signal in phase and polarization, they also split the optical carrier into two sub-carriers using frequency division multiplexing in the optical domain. Each sub-carrier carries half the data a la WDM, but the two carriers are only separated by 20 GHz so they can fit in a single 50 GHz DWDM window.
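The 12.5 Gbps slice rate falls out of multiplying the three splitting tricks described above (again ignoring framing and FEC overhead):

```python
payload = 100e9
ways = 2 * 2 * 2   # 2 bits/symbol (QPSK) x 2 polarizations x 2 optical sub-carriers
print(payload / ways / 1e9)   # -> 12.5 Gbps per electronic slice, 8 slices total
```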

Technology adapted from wireless to optical communication has allowed an order of magnitude growth in capacity of existing DWDM networks, without costly and disruptive upgrades to the installed plant. But this has taken us pretty close to the theoretical throughput limit under the Shannon–Hartley theorem given the typical parameters of existing large-scale networks. It is possible to get higher data rates with better OSNR, or with more bandwidth; but it's doubtful that we will see a 400 Gbps transponder suitable for general deployment in existing 50 GHz DWDM amplified networks originally engineered to carry only 10 Gbps.
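The Shannon–Hartley limit is easy to evaluate for a single 50 GHz slot. The figures below are a rough sketch that ignores real-world penalties such as filtering, fiber nonlinearity, and FEC overhead:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

def required_snr_db(rate_bps, bandwidth_hz):
    """Invert the theorem: SNR needed to hit a target rate in a given band."""
    return 10 * math.log10(2 ** (rate_bps / bandwidth_hz) - 1)

window = 50e9   # one 50 GHz DWDM slot
# Dual polarization gives two parallel channels, so each polarization
# only has to carry half the payload.
for total in (100e9, 400e9):
    per_pol = total / 2
    print(f"{total / 1e9:.0f} Gbps needs ~{required_snr_db(per_pol, window):.1f} "
          f"dB SNR per polarization")
```

The required SNR grows exponentially with spectral efficiency, which is why squeezing 400 Gbps into a slot engineered for 10 Gbps OSNR budgets is such a stretch.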

Doug Haluza,

CTO, Metro|NS

Ed. Note: this is the second post in a series.


Chip Scale Atomic Clock


Precise timing has many applications in telecommunications. But the precision of commercial systems is typically limited by the precision of a vibrating quartz crystal—a tiny chunk of rock. More precise atomic clocks using the natural frequency of individual atoms have been confined to laboratories and special applications because of their cost, size, and power consumption. But a chip-scale atomic clock (CSAC) that can fit on a PC board has just become commercially available, and this changes everything.

Applications that need more precise timing than a stand-alone quartz crystal provides can use precision time transmitted by GPS satellites. But the GPS signal can only be used to discipline the quartz clock. That's like trying to discipline a frisky dog on a leash: every time it runs to one side of the path, it has to be yanked back. It can't run away, but it's not a very stable reference. An atomic clock is like a trained dog that follows the path without pulling on the leash as much.
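The leash analogy can be made concrete with a toy random-walk model of clock error. The numbers here are purely illustrative and not calibrated to any real oscillator:

```python
import random

random.seed(42)

def simulate(drift_step, leash=None, steps=10000):
    """Toy clock: phase error does a random walk (oscillator noise).
    A GPS-disciplined clock is yanked back toward zero whenever the
    error exceeds the leash length; the correction is imperfect."""
    err, worst = 0.0, 0.0
    for _ in range(steps):
        err += random.gauss(0, drift_step)
        if leash is not None and abs(err) > leash:
            err *= 0.5          # the yank: pulls back, doesn't zero out
        worst = max(worst, abs(err))
    return worst

quartz_free = simulate(drift_step=1.0)             # free-running quartz
quartz_gps  = simulate(drift_step=1.0, leash=5.0)  # GPS-disciplined quartz
atomic      = simulate(drift_step=0.01)            # atomic: far less drift

print(f"worst error: free={quartz_free:.1f}  "
      f"disciplined={quartz_gps:.1f}  atomic={atomic:.1f}")
```

Disciplining bounds the error but leaves it bouncing around inside the leash; the atomic clock simply wanders far less in the first place.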

There are lots of military applications for a small, low-power atomic clock, including unmanned aerial vehicles and man-carried portable systems. Undersea exploration is a natural fit as well, because GPS is not available under water. It can also be used in telecom applications where getting a GPS signal is costly, like colocation facilities.

One interesting application is performance monitoring in low-latency networks. Measuring round-trip latency from one end of an out-of-service link with a loopback at the far end is relatively easy, because the transmitter and receiver use the same clock. But that only gives a best-case baseline. To test a live system under load, you can tap the signal at various points and time-stamp the packets, then compare the time stamps to continuously measure latency. But the latency measurements are only as good as the time stamps, which are subject to error from variation in the clocks at the measurement points.
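A toy calculation with made-up but representative numbers shows why time-stamp error dominates sub-microsecond latency measurements:

```python
# One-way latency measured by comparing time stamps taken at two tap
# points whose clocks are not perfectly synchronized.
true_latency_ns = 800            # a sub-microsecond cut-through path

def measured_latency(clock_error_ns):
    """Stamp at point A, stamp at point B; B's clock carries an offset."""
    t_a = 0
    t_b = true_latency_ns + clock_error_ns   # B's stamp includes its error
    return t_b - t_a

# Assumed error budgets: a GPS-disciplined quartz clock might wander on
# the order of a microsecond; a CSAC holds roughly two orders of
# magnitude tighter.
for name, err_ns in [("quartz+GPS", 1000), ("CSAC", 10)]:
    m = measured_latency(err_ns)
    pct = 100 * abs(m - true_latency_ns) / true_latency_ns
    print(f"{name}: measured {m} ns vs true {true_latency_ns} ns ({pct:.0f}% error)")
```

With microsecond-class clock error, the measurement error can exceed the latency being measured; with CSAC-class stability it drops to a few percent.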

High-end live network monitors currently use heated quartz crystals to minimize thermal effects. They can also take in a GPS signal to discipline the clocks. But this only allows latency measurements between different places with microsecond-order precision. With cut-through switches now forwarding packets with sub-microsecond latency, there is a gap between the precision needed and the precision available. The new CSAC provides time with about two orders of magnitude better stability than the best quartz crystal, and can therefore close this gap.

Not only is the performance of the CSAC two orders of magnitude better than quartz, its size, cost, and power consumption are at least an order of magnitude better than previous low-end atomic clocks. So this is truly a revolutionary, not an evolutionary, breakthrough. There are probably many varied applications for this new technology yet to be discovered as well.

Doug Haluza, C.T.O., Metro|NS

Thoughts? Add a comment below.


Terabit Switch on a Chip


Networking gear is trending away from custom ASICs to merchant silicon, and the newest generation of these switching chips has crossed the terabit per second threshold. A single chip can now switch 64 full-duplex 10 Gbps wire-speed flows without blocking, for a total of 1.28 Tbps, forwarding just under one billion packets per second. Switch latency is around one microsecond for both Layer-2 and Layer-3 forwarding, and the latency is consistent between any pair of ports because they are all driven off the same chip.
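The headline numbers are straightforward to verify with standard Ethernet arithmetic:

```python
# Sanity-check the chip's throughput and packet-rate figures.
ports = 64
line_rate = 10e9                      # bits/s per port, each direction

total_bps = ports * line_rate * 2     # full duplex counts both directions
print(total_bps / 1e12, "Tbps")       # 1.28

# Worst-case packet rate: minimum 64-byte frames, plus the 20 bytes of
# preamble and inter-frame gap that occupy the wire between frames.
frame_bits = (64 + 20) * 8
pps = ports * line_rate / frame_bits
print(round(pps / 1e6), "Mpps")       # ~952, "just under one billion"
```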

Vendors are now delivering this technology in top-of-rack (ToR) switches positioned for high-performance computing (HPC) clusters. One example is the new Force10 S4810 ToR switch which supports 48 dual-speed 1/10 GbE SFP+ and four 40 GbE QSFP+ ports in a 1 RU “pizza box” footprint. IBM and Cisco have similar offerings based on the same Broadcom Trident chip, but you must wait a while to get your hands on the Nexus 3064 from Cisco (unless you already have a substantial order booked).

Compare this to a legacy architecture Cisco 6509-V-E chassis that delivers similar throughput using 21 RU—that’s half a rack, with an order of magnitude greater power and cooling load. The single-chip solutions only draw a few hundred watts, so special power outlets are not needed. Standard equipment includes redundant hot-swappable power supplies and fans, with front/back airflow compatible with hot/cold aisle data centers.

The SFP+ and QSFP+ ports support Direct Attach cables without media conversion for ultra low latency on short reach connections. They also accept a range of pluggable optics suitable for metro optical networks, or directly driving wavelength division multiplex systems. Dual speed SFP+ slots support any mix of 1/10 GbE on copper or fiber, with a simple plug-and-play upgrade path.

Expect the economies of scale of ubiquitous Ethernet and PCI bus to squeeze InfiniBand (IB) out of its niche in HPC, the same way switched Ethernet crowded out ATM. Direct Attach provides switched connections between multiple devices, and PCIe handles point-to-point connections. We don’t see sustained interest in IB for high-frequency trading, where it should wash out relatively quickly because refresh cycles there are measured in months, not years.

Chassis-based Ethernet switches with pluggable cards will continue to be displaced by these fixed-port, modular interface boxes based on reference designs from the silicon merchants. This transition, limited only by Moore's Law and vendors' ability to productize apace, is likewise analogous to the move HPC made off custom supercomputer chassis to arrays of commodity PCs. Initial capital cost and ongoing power and space expense are lowered by dumping switch fabric backplanes for single-board designs.

Once basic switch functionality becomes commoditized by merchant silicon, vendors will have to differentiate their offerings with features, services, and relationships. That should be a positive development for everyone in the networking space.

Doug Haluza, C.T.O. Metro|NS


Friends, Customers, Colleagues: Welcome


Our team is excited to launch our website and officially present Metro|NS to the telco community. As many of you know, it's been a busy and fantastic few months. After the sale of Lexent, our team regrouped (took some time to unwind) and came back refreshed and ready to tackle our next project, Metro Network Services! We've spent the past four months strengthening our relationships with local service providers and equipment vendors, as well as researching the newest technologies impacting optical and wireless transport. After our initial R&D phase, we, as a team, are confident and ready to bring these solutions to market with you.

I hope you'll spend some time today familiarizing yourself with our new site. As a project management and integration firm, we've organized our site to help our clients and prospective customers drill down to the specific services we offer, as well as highlight some of the solutions we've delivered, to give an idea of the breadth and scope of our work. You'll also be able to find relevant and up-to-date blog posts here. Check back periodically for write-ups from our experts on what we're seeing in the field. We hope this blog serves as a conversation point for all of you, and we look forward to reading and responding to your comments.

Finally, on behalf of the team here, I’d like to thank you all for your continued support of Metro|NS. So many of you have been with us from Hugh O’Kane Electric through Lexent and now Metro|NS, and we are excited about the opportunity to continue working together. We’re looking forward to the future and energized by the prospect of helping you improve your network and grow your business!


Looking forward to speaking with you soon.

Victoria O’Kane,

Co-Founder and Vice President of Operations, Metro|NS

