Sunday 31 March 2013

On linear accelerators.

Here we shall determine the required power and size for a linear accelerator (whether of particles or massive objects) to deliver a chosen energy per shot.

As ever, a few assumptions to focus the problem and make life simpler:
  • The accelerator fires discrete shots, not a continuous beam. If it's firing a massive object this will always be the case, while particle accelerators vary.
  • The accelerating force is limited by compressive strength of the accelerator structure. This is similar to how I handled spacecraft acceleration limits previously. Tensile strength is usually lower than compressive strength so support in compression seems reasonable.
  • The accelerator structure is a prism. An obvious simplifying choice.
  • The projectile is fired at ultrarelativistic speeds, by which I mean in excess of 0.95c. For a particle accelerator this is natural. For a projectile accelerator I've explained previously that I feel this is necessary for it to be worthwhile as a weapon.
  • Recoil can be neglected. This will be true if the accelerator's physically connected to something with enough mass.
I'm going to always call what the accelerator fires a projectile, regardless of whether it's a solid object or a bunch of detached particles. At the speeds involved it actually won't make any difference to the terminal ballistics, as to quote xkcd's what if, "the bonds holding the sphere together are completely irrelevant, it’s just a collection of carbon atoms".

I'm also going to always be working in the reference frame of the accelerator's structure.

Consider the force exerted on the projectile by some component (an electromagnet, perhaps) that is physically supported by the accelerator structure, loading that structure in compression. Then

F = σ_c A

Where σ_c is of course the compressive strength, and A the cross-sectional area of the supporting structure.

The general definition of force, valid in special relativity as well as Newtonian mechanics, is the time derivative of momentum. With constant force, then for a projectile with no initial momentum and final momentum p, accelerated over time t,

F = p/t

In the ultrarelativistic limit, energy E is given by

E = pc

The time available for the acceleration depends of course on the length of the accelerator. Since we are taking the ultrarelativistic limit the projectile velocity is approximately c, and thus for an accelerator of length d,

t = d/c

Putting all the above equations together, we can express the projectile momentum and energy as depending on the volume and compressive strength of the accelerator

V = Ad
p = Ft = σ_c A d / c

E = pc = σ_c V

Power is the time derivative of energy. Since energy is directly proportional to momentum, a constant time derivative of momentum (constant force) implies a constant time derivative of energy, ie constant power. This is thus given simply by

P = E/t = Ec/d

For a worked example, I'll consider a desired yield of 1 megaton of TNT, with a diamond accelerator structure of cylindrical shape and a length of 1 kilometre.

E is 4.2 x 10^15 Joules.
σ_c is 1.2 x 10^10 Pascals.
d is 1000 metres.

V = E/σ_c = 350000 m^3
P = 1.3 x 10^21 W

A = V/d = 350 m^2
r = 10.6 m
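As a sanity check on the arithmetic, here is a minimal Python sketch of the worked example. The variable names are my own, c is approximated as 3 x 10^8 m s^-1, and the diamond density of 3500 kg m^-3 is the figure used in the maximum acceleration post further down the page.

```python
import math

c = 3.0e8          # speed of light, m/s (approximate)
E = 4.2e15         # desired projectile energy, J (1 megaton of TNT)
sigma_c = 1.2e10   # compressive strength of diamond, Pa
d = 1000.0         # accelerator length, m
rho = 3500.0       # density of diamond, kg/m^3

V = E / sigma_c              # required structure volume, from E = sigma_c * V
A = V / d                    # cross-sectional area of the prism
r = math.sqrt(A / math.pi)   # radius, for a circular cross-section
P = E * c / d                # constant power drawn during the shot
m = rho * V                  # mass of the accelerator structure

print(f"V = {V:.3g} m^3, A = {A:.3g} m^2, r = {r:.3g} m")
print(f"P = {P:.3g} W, structure mass = {m:.3g} kg")
```

The power comes out at 1.26 x 10^21 W and the structure mass at 1.2 x 10^9 kg, matching the figures quoted above.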

The accelerator radius is quite modest, indeed positively svelte, though the overall size and thus mass are considerable, the latter being over a million tonnes. The power required, however, is extreme, many orders of magnitude above anything humanity has yet created.

The consequences here are relatively simple. For fixed material properties, the energy of the projectile depends only on the accelerator's volume, or equivalently its mass. The power drawn, by contrast, is lower for a long slender accelerator than for a short and bulky one. Therefore, assuming that lower power draw is desired, linear accelerators should be made as long as possible, and even then systems need to be capable of extreme bursts of power. Of course if the weapon is desired to fire in the direction the spacecraft accelerates, the previously established limitations on overall craft length come into play.

If the first assumption is violated, a more sophisticated treatment will be required. A continuous beam, or at least one long compared to the accelerator, would I expect bring the power requirements down drastically while still being able to deliver considerable energy to the target.

If the second assumption is violated, the accelerator might be made more slender, but the power requirements will still be the same since they depend only on accelerator length and not on the details of its construction.

If the third assumption is violated, I suppose you'd have to use calculus to handle the varying forces. I'm not sure why a non-prismatic shape would be used though.

If the fourth assumption is violated, then again calculus will probably be required to handle the varying velocity, unless the speeds are low enough for a Newtonian constant acceleration treatment. For an accelerator of fixed length, reducing projectile speed while increasing mass to compensate and give the same final energy will increase the time for acceleration and thus lower the power requirements.
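As a rough illustration of that last point, here is a small Python sketch comparing the power needed for the 1 megaton, 1 km example above if the projectile is ultrarelativistic versus if it leaves at 0.1c. The 0.1c figure is an arbitrary choice of mine, slow enough that the Newtonian constant-acceleration formulae are roughly valid.

```python
c = 3.0e8    # speed of light, m/s (approximate)
E = 4.2e15   # projectile energy, J (1 megaton of TNT)
d = 1000.0   # accelerator length, m

# Ultrarelativistic case: the projectile crosses the accelerator at ~c.
t_rel = d / c
P_rel = E / t_rel

# Newtonian case: constant acceleration from rest to v over length d,
# so the average speed is v/2 and the acceleration takes time 2d/v.
# The projectile mass is whatever gives kinetic energy E at speed v.
v = 0.1 * c
t_newt = 2 * d / v
P_newt = E / t_newt

print(f"ultrarelativistic: t = {t_rel:.3g} s, P = {P_rel:.3g} W")
print(f"0.1c projectile:   t = {t_newt:.3g} s, P = {P_newt:.3g} W")
print(f"power reduced by a factor of {P_rel / P_newt:.0f}")
```

Slowing the projectile to 0.1c (and making it correspondingly heavier) cuts the power requirement by a factor of twenty, at the cost of the disadvantages of slow projectiles discussed in the effective range post.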

If the fifth assumption is violated, it will manifest as an inefficiency, since some of the energy input goes into accelerating the accelerator itself rather than the projectile.

Next week I will consider the case of a circular accelerator, which will have advantages and drawbacks compared to the linear variety.

Monday 25 March 2013

On superconducting cables.

Here we shall determine the power transmission capacity of a superconducting cable, as a function of the cable size and material properties.

Running late again. I always was bad with leaving stuff until the last moment then finding it took more work than I expected. Hey, somewhere in the world it is still Sunday night!

Even though superconducting cables have zero electrical resistance, they still have a critical current. Put too much current through and it will stop superconducting, and if it's not properly controlled this can result in damage. Along with the operating voltage, this will determine the maximum power that can be put through the cable.

Starting assumptions this time.
  • Circular cross section to the cable. It's the obvious shape.
  • The operating voltage is limited only by the insulation of the cable.
  • The superconductor is Type-II. Type-II superconductors can carry current through their whole bulk, and as such have a well-defined critical current density in Amps per unit area. Type-I superconductors confine current to their surface and thus the critical current depends on a wire's circumference not its cross-sectional area. They also have much smaller critical currents.
  • The superconductor has zero electrical resistance. This isn't a tautology; a phenomenon called flux creep allows Type-II superconductors to have low but non-zero resistance while still superconducting!
We will consider a cable with overall radius r, a fraction f of that being the superconductor, the remainder of the cable's radius being the insulator.

The critical current depends on the cross-sectional area of the superconductor and its critical current density J_c.

I = π(fr)^2 J_c = π f^2 r^2 J_c

The maximum voltage is limited by the dielectric strength, d, of the insulator. This is the electric field strength, measured in volts per metre, above which the insulator will stop insulating and current will discharge across it in a phenomenon called dielectric breakdown, permanently damaging the insulator if it is solid. The dielectric breakdown of air is familiar to us all as lightning.

The electric field strength across the insulator is simply given by the operating voltage of the superconductor inside divided by the width of the insulator. Maximum operating voltage is when this equals the insulator's dielectric strength.

d = V / ((1-f)r)
V = d(1-f)r = dr - dfr

The maximum power that can be put through the wire, which I'll call the power capacity, is then simply given by voltage times current. Multiplying the equations for current and voltage together, we get

P = π f^2 r^3 J_c d - π f^3 r^3 J_c d
P = π r^3 J_c d (f^2 - f^3)

If f is zero, there is no superconductor and obviously no current can flow. If f is one, there is no insulator so the superconductor can't be at any voltage and no power can be transmitted. For some f between 0 and 1, f^2 - f^3 will be at a maximum, and thus the power capacity will be a maximum. This will tell us how much of the cable's overall radius should be superconductor, and how much insulator, for best results. To find this maximum, we differentiate P with respect to f.

DP/Df = 2π r^3 J_c d f - 3π r^3 J_c d f^2
DP/Df = π r^3 J_c d (2f - 3f^2)

(Capital Ds have been used for differentiation to avoid confusion with the lowercase d used for the dielectric strength)

When DP/Df = 0, P is at a maximum, minimum, or point of inflection.

0 = π r^3 J_c d (2f - 3f^2)
0 = -3f^2 + 2f
0 = 3f^2 - 2f

Using the quadratic formula (or simply factorising, f(3f - 2) = 0),

f = 0 or f = 2/3

The solution at 0 is of no interest; P is 0 there. The remaining solution, at f = 2/3, must therefore be where P is at its maximum. There,

V = dr/3
I = 4π r^2 J_c / 9
P = 4π r^3 J_c d / 27
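As a quick numerical check, here is a small Python sketch (a simple grid search, nothing clever) confirming that f^2 - f^3 peaks at f = 2/3, where it equals 4/27.

```python
# Grid search for the superconductor fraction f that maximises f^2 - f^3,
# the shape factor in P = pi * r^3 * Jc * d * (f^2 - f^3).
best_f, best_val = 0.0, 0.0
for i in range(100001):
    f = i / 100000
    val = f**2 - f**3
    if val > best_val:
        best_f, best_val = f, val

print(f"optimum f ~= {best_f:.4f} (exact: 2/3 = {2/3:.4f})")
print(f"maximum of f^2 - f^3 ~= {best_val:.5f} (exact: 4/27 = {4/27:.5f})")
```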

For a worked example, we shall determine the necessary radius of a cable capable of carrying 15 terawatts, the average power consumption of all of humanity, using contemporary to near-future materials.

J_c = 10^9 A m^-2, or 10^5 A cm^-2. Various groups have reported values of this order of magnitude for different superconductors.
d = 10^8 V m^-1. The value for Teflon, among the best insulators.
P = 1.5 x 10^13 W, as mentioned.


r = (27P / (4π J_c d))^(1/3) = 0.069 m

A wire 14 centimetres across could carry the entire world's power consumption; testament to the capabilities of superconductors.
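The same calculation as a short Python sketch, inverting P = 4π r^3 J_c d / 27 for the radius and then checking the voltage and current at the optimum; the input values are the assumed figures listed above.

```python
import math

Jc = 1e9     # critical current density, A/m^2
d = 1e8      # dielectric strength of the insulator, V/m
P = 1.5e13   # target power capacity, W

# Invert P = 4 * pi * r^3 * Jc * d / 27 for the cable radius.
r = (27 * P / (4 * math.pi * Jc * d)) ** (1 / 3)

# At the optimum, two thirds of the radius is superconductor.
V = d * r / 3                     # operating voltage
I = 4 * math.pi * r**2 * Jc / 9   # critical current

print(f"r = {r:.3f} m, diameter = {2 * r * 100:.0f} cm")
print(f"V = {V:.3g} V, I = {I:.3g} A, V * I = {V * I:.3g} W")
```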

As ever, the equations have consequences, which are in fact what we're mainly interested in. The key factor is that the power capacity of the wire scales with the radius cubed, not squared, because a wider cable not only carries more current but also operates at a higher voltage. A corollary of this is that it's better to use one big cable than lots of small ones with the same combined mass.
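To put a number on that corollary, here is a small sketch comparing one cable against several smaller cables with the same combined cross-sectional area (and hence roughly the same mass per unit length), using the capacity formula derived above.

```python
import math

Jc, d = 1e9, 1e8   # critical current density (A/m^2), dielectric strength (V/m)

def capacity(r):
    """Power capacity of an optimally proportioned cable of radius r."""
    return 4 * math.pi * r**3 * Jc * d / 27

R = 0.069   # radius of the single large cable from the worked example, m
for n in (1, 4, 16):
    r_small = R / math.sqrt(n)   # n cables with the same combined cross-section
    total = n * capacity(r_small)
    print(f"{n:>2} cable(s): {total:.3g} W total "
          f"({total / capacity(R):.2f} x the single cable)")
```

Splitting the cross-section into n cables cuts the total capacity by a factor of the square root of n: four cables carry half as much, sixteen a quarter as much.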

A second notable consequence is that there are gains to be had not just by improving superconductor technology but also by making better insulators.

A third consequence comes from a bit of knowledge about superconductors. Jc is temperature dependent; at the superconductor's critical temperature, the warmest it can superconduct, it approaches zero, while at lower temperatures it is higher. Thus, even given a much-vaunted room-temperature superconductor, for maximum power transmission it would still require cryogenic cooling.

If the first assumption is violated, a non-circular cross-section will need more insulation to carry a given amount of power. It becomes rather a case of: why would you do that?

If the second assumption is violated, and other factors limit the voltage, then for large cables the two-thirds superconductor, one-third insulation ratio will no longer hold; the insulation can be thinned because of the reduced voltage. Consequently the power capacity will then scale approximately with r squared, not r cubed. The drawback to using multiple smaller cables will be lessened, though not wholly eliminated, as they will require more insulation than a single large one.

If the third assumption is violated the situation would change considerably. For this to happen though would require a novel Type-I superconductor with an unprecedentedly high critical current, or else something entirely unanticipated.

If the fourth assumption is violated we may have to consider power losses in the wire that would create inefficiency, as well as the implications for the cooling system of heating in the wire. The exact impact of these factors I am unsure of, though qualitatively I expect larger cables would be required to carry the same current.

For some background information, the Open University has a free e-text on superconductivity. It focuses primarily on Type-I superconductors, which were historically discovered earlier and are better understood but have fewer practical applications than Type-II superconductors.

Tuesday 19 March 2013

On effective range.

Here we shall determine the effective range, meaning the range at which you can hit what you are shooting at, of a weapon in space.

I'm planning on posting updates to the blog weekly, every Sunday night, for as long as I have material to work with. This one is a bit late. If you have any questions you'd like me to tackle, post them in the comments or telegram me on Nationstates.

To business. Our starting assumptions:
  • The weapon's intrinsic range is much greater than its effective range. This is valid for solid projectiles, though may not be for realistic lasers or particle beams which diverge.
  • The weapon shoots one bullet at a time. We aren't considering shotgun-like approaches.
  • The bullets travel in straight lines and don't have any active manoeuvring systems. We aren't considering guided munitions.
  • The weapon is intrinsically precise, it won't miss a non-moving target.
  • The defender can change their acceleration without delay.
  • The attacker and defender both have effectively-instant FTL scanners. This is not uncommon in science-fiction.
The scenario is thus simple. The attacker fires, aiming the bullet so it would strike the middle of the defending ship. The defender immediately detects this and takes evasive action by accelerating. If the defender can completely vacate the space it was occupying when the shot was fired, the shot misses, while if the defender is still in that place then it is a hit. We shall use the reference frame of the defender, prior to their taking the evasive action. The below diagram depicts a miss, the defender having just managed to move half its own length for the shot to pass harmlessly behind it.


With the projectile travelling at velocity v and having to cover distance r, the defender has a time to evade

t = r/v

To determine how far the defender can move within that time, we can use one of the SUVAT equations for uniform acceleration situations to find that distance

s = ut + at^2/2

u, the initial velocity, is zero since we chose our reference frame so that would be the case. Therefore, substituting the first equation into the second, we get

s = ar^2 / (2v^2)

If

s < l/2

Then the projectile will hit. Substituting and rearranging, we get

r = v √(l/a)

This I refer to as the "Range Equation". If the target is closer than r, it will be hit. If not, it will be missed.

For a worked example, consider the 1 km long ship capable of 100 g of acceleration from the previous post, being targeted by a projectile moving at 0.95c. Converting to SI units

v = 2.85 x 10^8 m s^-1
l = 1000 m
a = 981 m s^-2

r = 2.88 x 10^8 m

So the target can be reliably hit if it's as far as 288 thousand kilometres away. For context, the average Earth-Moon distance is 385 thousand km.
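The same numbers in a minimal Python sketch of the range equation; the function name is mine, for illustration only.

```python
import math

def effective_range(v, l, a):
    """Range inside which a target of length l, accelerating at a,
    cannot dodge a projectile travelling at speed v: r = v * sqrt(l / a)."""
    return v * math.sqrt(l / a)

v = 0.95 * 3.0e8   # projectile speed, m/s (0.95c)
l = 1000.0         # target length, m
a = 100 * 9.81     # target acceleration, m/s^2 (100 g)

r = effective_range(v, l, a)
print(f"effective range = {r:.3g} m, about {r / 1e6:.0f} thousand km")
```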

The range equation has various consequences. Most obviously, smaller targets can get closer to an attacker safely than larger ones. Critically, if two ships are in one-on-one combat, both using weapons that fire their projectiles at the same speed, the smaller one can sit at a range where it will always hit its enemy while never being hit itself. Compactness is thus advantageous, something only emphasised by the previous post's result that shows a more compact ship will also accelerate harder.

Also, the faster the projectile can be fired the better. Laser shots will of course go at light-speed, and particle beams won't be far off, so massive projectiles need to be doing relativistic speeds or have other advantages in order to compete.

Increasing acceleration while holding size the same, for example by advancing technology, has a limited benefit compared to reducing size, which both shrinks the target and brings the natural consequent increase in acceleration.

Finally we can tell that the intrinsic precision needs to be pretty precise, to well under a second of arc. The modern Hubble Space Telescope is capable of pointing with a precision of 0.01 arc seconds, so this is not an especially demanding requirement.

If the first assumption is violated, and range is actually limited by the intrinsic behaviour of the weapon, the argument becomes moot. A consequence of such a situation is that large ships can now get just as close to their attacker as smaller ones. It's also possible for the range equation to apply to small targets but not to large ones, creating a two-regime situation. This would likely result in the 'borderline' area being vacated as ships would either be larger for more general capability, or smaller for being hard to hit.

If the second assumption is violated, then the geometry needs to be considered in more detail. The overall conclusion that smaller ships are harder to hit - which is the intuitive result after all - will probably remain intact though.

If the third assumption is violated the argument becomes moot. Guided munitions are likely to have much longer effective ranges. However they may have other drawbacks.

If the fourth assumption is violated then the imprecise pointing will result in probabilistic concerns, but the overall effect is likely to be similar to limited intrinsic range.

If the fifth assumption is violated, then a 'reaction time' delay must be incorporated into the equations, making them more complicated.

If the sixth assumption is violated, then the defender of course cannot actively dodge the incoming fire if it's travelling at or near light-speed. However, since the attacker's information will always be out of date, the defender can make random manoeuvres. If they are within a range half that given by the above equation, they will surely be hit (the derivation of this will come in a future post). Beyond that, the situation becomes probabilistic instead of a certain miss.

Sunday 10 March 2013

On maximum acceleration.

Here we shall determine the maximum possible acceleration of a science-fiction spacecraft.

A few starting assumptions shall be made.
  • The spacecraft is accelerated by reaction thrusters at its stern.
  • The reaction thrusters can be built to produce as much thrust as desired. We shall not worry about exactly how they work, just treat them as black boxes.
  • The spacecraft can be approximated as a simple prism of material with a certain compressive strength.
Since the thrusters can be as forceful as we like, the limit on spacecraft acceleration comes from its structure. If the thrusters push too hard, they will crush the spacecraft they are supposed to be pushing.

A spacecraft in deep space subject to a force giving it an acceleration a is an analogous situation to a spacecraft sitting bow-skywards on a planet with surface gravity g=a.


As such, the maximum height of a column you could build on that planet is equal to the maximum length of a spacecraft you could build to withstand that acceleration. A taller column will fail by crushing. This height can be derived as follows.

σ_c is the material compressive strength
ρ is the material density
a is acceleration
h is column height
A is column cross-sectional area

The mass of the column is given by

m = Ahρ

And the pressure at its base by

σ = ma/A = hρa

The cross-sectional area unsurprisingly cancelling out.

Obviously the pressure at the base of the column is greater than at any higher point. If this pressure is less than the material compressive strength, the column - or the spacecraft - will not fail by crushing. It may still fail by buckling, but buckling requires lateral deflection. As such, I feel it can be prevented by an actively-controlled restoring force to counter the deflection before it reaches failure, or by just not making the spacecraft too slender.

Rearranging to give acceleration as a function of the other variables,

a < σ_c / (ρh)

This, then, is the maximum possible acceleration of a science-fiction spacecraft.

A real spacecraft will not be a homogeneous block of material. To treat it as such, we can calculate its average compressive strength and density as follows.

f is the volume fraction of the spacecraft that is structure
ρ_s is the density of the structure material
σ_cs is the compressive strength of the structure material
ρ_f is the mean density of the functional parts of the spacecraft, ie everything but its structure
The compressive strength of the functional parts is assumed to be negligible

The average compressive strength and density of the whole spacecraft are given then by

σ_c = f σ_cs
ρ = f ρ_s + (1-f) ρ_f

For a worked example, consider a spacecraft with the following properties

f = 0.1, ie 10% of the craft is its structure
ρ_s = 3500 kg m^-3, ie diamond
σ_cs = 12 GPa, again diamond
ρ_f = 1000 kg m^-3, same as water, just feels like a good value
h = 1 km, feels like a good size for a fairly large spacecraft

Then
σ_c = 1.2 GPa
ρ = 1250 kg m^-3
a < 960 m s^-2

So the maximum acceleration is about a hundred Earth gravities.
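A minimal Python sketch of the averaging and the resulting acceleration limit, using the assumed values listed above.

```python
f = 0.1           # volume fraction of the craft that is structure
rho_s = 3500.0    # structure density, kg/m^3 (diamond)
sigma_cs = 12e9   # structure compressive strength, Pa (diamond)
rho_f = 1000.0    # mean density of the functional parts, kg/m^3
h = 1000.0        # spacecraft length, m
g = 9.81          # standard gravity, m/s^2

# Average the properties over the whole craft.
sigma_c = f * sigma_cs
rho = f * rho_s + (1 - f) * rho_f

# Maximum acceleration before the structure fails in compression.
a_max = sigma_c / (rho * h)

print(f"sigma_c = {sigma_c:.3g} Pa, rho = {rho:.0f} kg/m^3")
print(f"a_max = {a_max:.0f} m/s^2, about {a_max / g:.0f} g")
```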

If the shape is a pyramid or cone rather than a prism, the exact formula may change, I am uncertain on what to, but the general form will remain the same.

This limit implies various consequences. Most obviously, smaller spacecraft are capable of greater accelerations than larger ones, as most would intuitively expect. Also, to obtain maximum acceleration with a given spacecraft volume, a relatively flat spacecraft, for example a classic flying saucer, will be superior to the slender designs often seen in science fiction.

If the first assumption is violated, the entire argument can cease to hold. There is unlikely to be much benefit from trying to place engines along the flanks of the spacecraft; material failure will still occur, but in shear rather than compression. However, reactionless drives that create a field acting on the entire bulk of the spacecraft, or on the space it sits in, will completely nullify the equations here. With such reactionless drives, small and large craft might accelerate equally, or large ones could even accelerate harder. Shape may not factor into acceleration performance, allowing it to be determined by other considerations.

If the second assumption is violated, the specific equations become invalid, but the generic square-cube law still indicates smaller spacecraft can probably accelerate harder than larger ones, and a flat craft still has more space on its surface for engines.

If the third assumption is violated, the entire argument again may cease to hold. One way to accomplish this is for the spacecraft to not rely solely on a physical structure for its strength, but to use dynamic support methods such as some sort of forcefield to transfer the thrust from the engines forward to the bows. Such technology may reverse the situation and make long slender spacecraft actually advantageous