An Anti-Pollution Enzyme That Eats PET

Scientists in Britain and the United States say they have engineered a plastic-eating enzyme, a breakthrough that could aid the fight against pollution.
The enzyme is able to digest polyethylene terephthalate (PET). PET becomes viscous above 70 °C; its melting point is above 250 °C.

Scientists have accidentally created a mutant enzyme that breaks down plastic drink bottles. The discovery could help solve the global plastic pollution crisis by enabling, for the first time, the full recycling of bottles.

The new research was prompted by the 2016 discovery, at a waste dump in Japan, of the first bacterium that had naturally evolved to “eat plastic.” Scientists have now revealed the detailed structure of the crucial enzyme.

The mutant enzyme takes a few days to start breaking down the plastic, far faster than the centuries the process takes in the oceans. But the researchers are optimistic that it can be sped up further and become a viable large-scale process.

Around 1 million plastic bottles are sold every minute worldwide, and only 14% are recycled. Many plastics end up in the oceans, where they have contaminated even the most remote regions, harming marine life and potentially the people who eat seafood.

Currently, the bottles that are recycled can only be turned into opaque fibers for clothing or carpets. The new enzyme could allow clear plastic bottles to be recycled back into clear plastic bottles, which could reduce the need to produce new plastic.

The enzyme’s structure turned out to be very similar to one evolved by many bacteria to break down cutin, a natural polymer that plants use as a protective coating. But when the team manipulated the enzyme to explore this connection, they accidentally improved its ability to eat PET.

Industrial enzymes are widely used in, for example, washing powders and biofuel production. They have been engineered to work up to 1,000 times faster within a few years, the same timescale McGeehan envisions for the plastic-eating enzyme.

One improvement being explored is transplanting the mutant enzyme into an “extremophile” bacterium that can survive temperatures above 70 °C, the point at which PET changes from a glassy to a viscous state, making it degrade 10 to 100 times faster.

Earlier work had shown that some fungi can break down PET, which accounts for about 20% of global plastic production. But bacteria are far easier to harness for industrial uses.

“Enzymes are non-toxic, biodegradable, and can be produced in large quantities by microorganisms.”

Until next time.
CTsT=CVP

Source used:
* https://www.theguardian.com/environment/2018/apr/16/scientists-accidentally-create-mutant-enzyme-that-eats-plastic-bottles

 


IBM Research Makes the World’s Smallest Movie Using Atoms

IBM scientists unveiled the world’s smallest movie, made with some of the tiniest building blocks in the universe: atoms. Titled “A Boy and His Atom,” the Guinness-verified film was made with thousands of precisely positioned atoms to create nearly 250 frames of stop-motion action (frame-by-frame filming).


“A Boy and His Atom” features a character named Atom, who befriends a single atom and goes on a playful journey that includes dancing, playing catch and bouncing on a trampoline. Set to a cheerful musical score, the film is a unique way to convey science outside the research community.

“Moving atoms is one thing; you can do that with the wave of a hand. Capturing, positioning and shaping atoms to create an original motion picture at the atomic level is a precise and entirely novel science,” said Andreas Heinrich, principal investigator at IBM Research. “At IBM, researchers don’t just read about science; we do it. This movie is a fun way to share the atomic-scale world and show everyday people the challenges and fun that science can create.”



Making the Movie
To make the movie, the atoms were moved with a scanning tunneling microscope invented by IBM.
“This Nobel Prize-winning tool was the first device that enabled scientists to visualize the world down to the single atom,” explained Christopher Lutz, research scientist at IBM Research.
“It weighs two tons, operates at minus 268 degrees Celsius and magnifies the atomic surface more than 100 million times. The ability to control temperature, pressure and vibrations at exact levels makes our IBM Research lab one of the few places in the world where atoms can be moved with such precision,” he added.

Operating remotely from a standard computer, the IBM researchers used the microscope to control an ultra-sharp needle along a copper surface to “feel” the atoms.
Just one nanometer from the surface, a billionth of a meter away, the needle can physically attract atoms and molecules on the surface and pull them to a precisely specified location on that surface. The moving atom makes a distinctive sound, the critical signal indicating how many positions it has actually moved.
As the movie was created, the scientists rendered still images of the individually placed atoms. The result was 242 individual frames.

The Need to Compress Big Data
Making the world’s smallest movie is not entirely new territory for IBM. For decades, IBM Research scientists have studied materials at the nanoscale to explore the limits of data storage, among other things.
As computing circuitry continues to shrink toward atomic dimensions, as it has for decades in line with Moore’s Law, chip designers are running into the physical limits of traditional techniques.

Exploring unconventional methods of magnetism and the properties of atoms on well-controlled surfaces allows IBM scientists to identify entirely new paths for computing.
Using the smallest object available for engineering data storage devices, individual atoms, the same IBM research team that made this movie also recently created the world’s smallest magnetic bit. They were the first to answer the question of how many atoms it takes to reliably store one bit of magnetic information: 12.


By comparison, it takes roughly 1 million atoms to store one bit of data on a modern computer or electronic device. If commercialized, this atomic memory could one day store every movie ever made in a device the size of a fingernail.
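The comparison above works out to a striking density gap; a quick back-of-the-envelope calculation from the two figures quoted in the article:

```python
# Atoms needed per bit, as stated in the article.
ATOMS_PER_BIT_TODAY = 1_000_000   # modern device (approximate)
ATOMS_PER_BIT_ATOMIC = 12         # IBM's smallest magnetic bit

# How many times fewer atoms the atomic-scale bit needs:
improvement = ATOMS_PER_BIT_TODAY / ATOMS_PER_BIT_ATOMIC
print(f"~{improvement:,.0f}x fewer atoms per bit")  # ~83,333x
```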

“The research means asking questions that go beyond what is needed to find good short-term engineering solutions to problems. As data creation and consumption grow, data storage needs to get smaller, all the way down to the atomic level,” Heinrich continued. “In this movie, we applied the same techniques used to develop new computing architectures and alternative ways of storing data.”


 * May 7, 2013, EbizLatam

China Overtakes the US as the Largest Computer Market

China became the world’s largest market for personal computers last year, relegating the US to second place, according to a study released by the firm IHS iSuppli.

China’s retail network received 69 million personal computers last year, against 66 million units in the US. IHS iSuppli analysts also note that China’s rural areas represent a vast untapped market.
Another peculiarity, likewise attributable in part to demand from large rural areas, is that laptop shipments in China equal desktop shipments, while in the rest of the world the split is 65 to 35.
* Moscow, April 30, RIA Novosti.

Robotics: Top Prosthetic Limbs Bring Hope to Amputees

In the aftermath of the Boston Marathon bombing, the photo of Jeff Bauman Jr. being rushed to the hospital shortly after having his legs blown off brought us face to face with the grim reality that many victims of this tragedy would be undergoing limb amputation.
With the Modular Prosthetic Limb, researchers from Johns Hopkins University Applied Physics Lab have successfully demonstrated the possibilities of controlling artificial limbs simply by thought. 

JOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LAB

But advances in prosthetic technologies over the last thirty years have far surpassed the crude, wooden models that once made having artificial limbs such a nuisance. In fact, today’s robotic and bionic devices are giving amputees nearly full restoration of their lost limbs.
Hugh Herr, head of the biomechatronics research group at MIT Media Lab and double leg amputee, says he predicts “bionics will catch on like wildfire.”
“It’s a win for the patient. It’s a win for the healthcare supplier and it’s a win for the payer,” Herr told Discovery News. “Right now the payers think that high tech is expensive and should be avoided. I’m trying to change that paradigm.”
While bionic prosthetics are more expensive on a device-by-device basis, Herr says they can help reduce secondary disabilities such as hip arthritis, knee arthritis and lower back pain that amputees often develop from using prosthetic limbs.
“Those secondary disabilities are what drive up health care costs,” he said. “If you can emulate nature, if you can truly replace a limb after amputation, those secondary disabilities will never emerge and that person will remain healthy for their entire life and won’t have these astronomically high health care payouts.”
Foot and ankle prosthetics have come a long way since the SACH, the Solid Ankle, Cushion Heel, developed in the mid-1950s.

WILLOW WOOD

Herr says there have been three eras in prosthetic limb technology. First was the SACH foot era, which stands for Solid Ankle, Cushion Heel. Developed in the mid-1950s, the foot typically had a wood core, with a foam and rubber outer shell. While the artificial foot gave patients more stability, it offered little lateral movement.
“Foot-ankle prostheses you could characterize as related to energy return during the push-off phase,” Herr said. “The SACH foot basically stores little to no energy and returns little to no energy.”
New models of the SACH foot with titanium cores are still used today, but are only recommended for patients with a low activity level.

The carbon graphite technology in the Flex Foot essentially puts a spring in the wearer’s step.

 

JULIAN FINNEY/GETTY IMAGES

For hundreds of years, prosthetic feet that were fundamentally similar to the SACH foot were widely used. During the 1980s, a new era began, when Herr says the Flex Foot carbon design changed that paradigm.
Developed by Van Phillips in 1984, Flex Foot’s carbon graphite technology essentially put a spring in the wearer’s step. By storing the kinetic energy of each step, the artificial foot allowed amputees to jump, walk and run at speeds of up to 28 feet per second. The Flex-Foot Cheetah blade is the technology’s highest-performance model, used primarily by people with below-knee amputations. Though the Cheetah blades allow wearers to run like the wind, Herr says carbon prosthetics still don’t provide a normal level of energy return.

The BiOm Ankle System is the first bionic ankle-foot device commercially available for lower-extremity amputees.

 

IWALK INC.

The third era is robotics or bionics, in which there’s an energy source and an actuator in the artificial limb that can produce energies greater than what a spring can produce. “For legs, we’ve just entered that era,” said Herr, whose BiOm Ankle System is a leader in the field.
As the first bionic ankle-foot device commercially available for lower-extremity amputees, the BiOm Ankle System reestablishes the biomechanics of the ankle-foot function across all walking speeds. The system’s comprehensive design emulates the muscles and tendons of the human ankle joint and puts forth more mechanical energy than it absorbs. This allows amputees to walk with a more natural gait at their own chosen speed, using the same amount of energy as a non-amputee.
Using three computers and six sensors, the BiOM’s processors are able to adjust stiffness in the ankle, spring equilibrium and propulsive torque 500 times a second. When an increase in torque is detected in the ankle joint, an actuator helps trigger more torque to modulate the foot’s push-off power, even at different velocities and inclines.
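As a rough illustration of this kind of control loop, here is a minimal Python sketch of a spring-plus-propulsion ankle model. The function name, gains and phase threshold are all hypothetical illustration values, not the BiOM’s actual parameters:

```python
import math

def ankle_torque(angle, velocity, gait_phase,
                 stiffness=300.0, damping=2.0,
                 equilibrium=0.0, push_off_gain=90.0):
    """Toy ankle controller: spring torque toward an equilibrium angle,
    light damping, plus a propulsive burst late in the stance phase.
    All gains are made-up illustration values, not BiOM parameters."""
    torque = -stiffness * (angle - equilibrium) - damping * velocity
    if gait_phase > 0.6:  # approaching push-off: add propulsive torque
        torque += push_off_gain * math.sin(math.pi * gait_phase)
    return torque

# The article's 500 Hz update rate means one control tick every 2 ms:
DT = 1.0 / 500.0
```

At each 2 ms tick, a real controller would re-read the sensors, re-evaluate a model like this, and command the actuator accordingly.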

Icelandic company Ossur introduced their Symbionic Leg in 2011 as the world’s first complete bionic leg.

 

OSSUR

Ossur, the Icelandic company behind the cheetah blades, introduced their Symbionic Leg in 2011 as the world’s first complete bionic leg. It’s a combination of Ossur’s Rheo Knee and Proprio Foot.
Integrated sensors in the polyurethane knee monitor weight, motion and force, while onboard microcontrollers process that data and track gait patterns. An actuator interprets that data and initiates the appropriate resistance in the knee joint, whether a person is standing still, turning a corner or walking in a straight line.
The foot design is based on Flex-Foot technology and incorporates lightweight, durable carbon fiber packaged with Terrain Logic, an onboard artificial intelligence system that calculates sensor data and feeds it to an actuator, which then relays motion instructions to precision motors.
Touch Bionics’s iLimbs use muscle sensors placed on the skin of an amputee’s residual limb, and the company just announced the first upper limb prosthesis controlled by a smartphone app.

TOUCH BIONICS

Developed by Touch Bionics, the iLimb uses muscle sensors placed on the skin of an amputee’s remaining stump. The electric signals generated by the wearer’s muscles control an onboard processor embedded in the prosthetic hand. This myoelectric technology gives amputees a more precise range of control and movement, even giving them the ability to pick up coins.

BLOG: App Controls Bionic Hand

Most recently, Touch Bionics announced their iLimb Ultra Revolution will be the first upper limb prosthesis controlled by a smartphone app. The bionic hand features four individually articulating fingers and a rotating thumb that can either be controlled by the wearer’s muscle signals or the new Quick Grip app system that automatically forms the hand into preset grip patterns. By tapping the app, users can access 24 pre-programmed motions that assist with picking up objects, grasping tools or shoe-tying, to name just three.
DARPA helped develop two anthropomorphic modular prototype prosthetic arms, both of which offer increased range of motion, dexterity and control options. 

DEKA RESEARCH

When DARPA launched its $150 million Revolutionizing Prosthetics program in 2006, upper-limb prosthetic technology was lagging far behind lower-limb technology. Out of that program two anthropomorphic modular prototype prosthetic arms have emerged, both of which offer increased range of motion, dexterity and control options. “From that program there’s a number of technologies that have not been commercialized yet that I hope will be commercialized in the future,” said Herr. “One, of course, is Dean Kamen’s ‘Luke’ Arm.”
Nicknamed after the prosthetic worn by Luke Skywalker in “Star Wars,” Kamen’s DEKA Arm uses foot controls that work simultaneously with sensors in the device’s sockets. Wearers use the foot controls like a joystick to access the mechanical arm’s range of motion. The system even provides feedback via sensors worn on the remaining part of the amputee’s limb that let wearers know how hard they are grasping an object.
With a Modular Prosthetic Limb, nerves that previously went to a patient’s hand were re-routed to healthy muscles in the amputee’s stump.

 

JOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LAB

With the Modular Prosthetic Limb, Johns Hopkins University Applied Physics Lab has successfully demonstrated the possibility of controlling artificial limbs simply by thought. Led by Michael McLoughlin and Albert Chi, the APL’s work comprises the second prototype of DARPA’s Revolutionizing Prosthetics program.
In testing the prosthetic arm, nerves that previously went to a patient’s hand were re-routed to healthy muscles in the amputee’s stump. Sensors on the skin picked up brain signals from those nerves, then translated those signals to the robotic arm.
For quadriplegics like Jan Sherman, who was recently featured on 60 Minutes, simply re-routing nerves is not an option. Under DARPA’s program, she had University of Pittsburgh neurosurgeon Elizabeth Tyler-Kabara implant two sensors about the size of a pea on her brain. The sensors were wired to two computer connection pedestals that stuck out on the top of her head. When «plugged in,» Sherman was able to move a robotic arm and hand merely with her thoughts.

The 9 coolest mobile hydraulic cylinder applications

Hydraulic cylinders are used in countless applications that demand power density and flexibility. Here are 9 of the coolest (not to mention some of the biggest!) applications for mobile hydraulic cylinders.

1. Terex Roadmaster crane


The Terex Roadmaster crane provides a lifting capacity of 7 tons at a 20-meter radius with a fully extended boom and features a single cylinder telescopic system.

2. Inbye mining roof support


Underground mines are susceptible to collapse, which is a pretty bad thing. But mining roof support systems, like this one developed by Inbye of Australia, use some major hydraulic cylinder action to keep miners safe and sound.

3. Liebherr mining truck


Speaking of mining, what do you do with all that material you dig up? The mammoth Liebherr T284 mining truck employs two double-stage, double-acting hoist cylinders with inter-stage and end cushioning in both directions for its dump system.

4. Piling barge


You want to see a massive hydraulic cylinder? Look no further than Hunger Hydraulik’s behemoth on this 93-meter piling barge!

5. Caterpillar hydraulic excavator


Caterpillar’s 390D L Hydraulic Excavator features a load-sensing system to ensure high efficiency and productivity with minimal losses. The boom cylinder has an 8.27-in. bore and 77.44-in. stroke, while the stick cylinder has an 8.66-in. bore and 89.05-in. stroke. Maximum system pressure is 5,000 psi.
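Bore and pressure together determine how hard a cylinder can push: extension force is simply pressure times piston area. A quick sketch using the figures quoted above (full-bore extension force, ignoring rod-side area and losses):

```python
import math

def cylinder_force_lbf(bore_in, pressure_psi):
    """Hydraulic cylinder extension force: pressure x piston area."""
    area = math.pi * (bore_in / 2.0) ** 2   # piston area, square inches
    return pressure_psi * area

# Figures quoted for the Cat 390D L above:
boom_force = cylinder_force_lbf(8.27, 5000)
stick_force = cylinder_force_lbf(8.66, 5000)
print(f"boom:  {boom_force:,.0f} lbf")   # roughly 269,000 lbf
print(f"stick: {stick_force:,.0f} lbf")  # roughly 295,000 lbf
```

That is on the order of 130 tons of force from the boom cylinder alone, which is why bore diameter matters so much in these machines.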

6. Sinomach road paver


Sinomach’s TWL4500 road pavers feature a hydraulic telescopic mangle, and the width of the hydraulic extension screed is infinitely adjustable.

7. Plustech forest wood excavator


This fascinating forest wood excavator—which looks like a piece of equipment straight out of Avatar—was developed almost a decade ago in Finland by Plustech Oy, now a part of John Deere’s Construction and Forestry Div.

8. Ditch Witch directional drill


Rock drilling isn’t exactly easy work. But with equipment like the Ditch Witch JT100 all-terrain directional drill, it’s a relative breeze. Can we get one of these for our backyard?

9. New Holland combine


The New Holland CX8000 Series Super Conventional Combines use a hydraulic cylinder to pivot the head, allowing users to follow uneven terrain.
Posted on Friday, April 12, 2013

Rex, the Million-Dollar “Bionic Man”

Rex is a robot with artificial organs, synthetic blood and robotic limbs built with the latest technology.

London (EFE). Starting tomorrow, London’s Science Museum is exhibiting Rex, the first “fully bionic man,” with artificial organs, synthetic blood and robotic limbs, built at a cost of £640,000 (more than one million dollars).

Conceived, designed and assembled by a group of robotics experts, the two-meter-tall Rex has a fair amount in common with Steve Austin, the bionic man portrayed in the 1970s television series “The Six Million Dollar Man.”

With a face that lends him humanity, the bionic man incorporates some of the latest advances in prosthetic technology, along with an artificial pancreas, kidney, spleen and trachea, and a working circulatory system.

Richard Walker, one of the experts involved in his construction, told the BBC that the result of the work is “very significant,” as it has shown “how close prosthetic technologies are to rebuilding the human body.”

BIONIC TECHNOLOGY
Tomorrow Rex stars in a Channel 4 documentary, “How to Build a Bionic Man,” which also features the Swiss social psychologist Bertolt Meyer, who was born without a right hand and wears a £30,000 (almost $47,000) bionic prosthesis.

“Some of the parts we used are already worn by people who can live thanks to them. Artificial retinas let people see again. We have combined these advances with the latest in robotics,” Walker added.

The documentary will show Meyer testing modular prosthetic limbs far more advanced than his own.


“I have followed new bionic technologies for a long time, and I think that until five or six years ago not much was happening. Now, suddenly, we are at a point where we can build a body that is magnificent and beautiful in its own special way,” said the psychologist.

50 Years of CAD

In January, 1963, Ivan Sutherland, a PhD candidate at MIT, submitted his thesis, titled “Sketchpad: a man-machine graphical communication system,” describing his work in creating what is now recognized as one of the very first interactive CAD systems.
Sketchpad ran on MIT Lincoln Labs’ TX-2 computer. It was, at the time, one of the biggest machines in the world, with 306 kilobytes of core memory. It differed from most contemporary computers, in that it was designed to test human-computer interaction. In addition to the standard complement of I/O devices, the TX-2 had programmable buttons for entering commands, an oscilloscope/video display screen (addressable to 1024×1024 pixels), a light pen for input, and a pen plotter for output. It was, in a way, the first personal computer, albeit one that took up an entire building.

Ivan Sutherland on MIT Lincoln Labs’ TX-2 computer.
Unlike earlier computer applications, which were batch oriented, Sketchpad was interactive. Using the light pen and input buttons, you could draw directly on the screen, using a crosshair cursor. The program supported points, line segments, and arcs as basic elements, but allowed these to be saved into master drawings, which could be copied or instanced. This facility was used to create alphanumeric character glyphs, and electrical schematic symbols.
One thing that made Sketchpad really stand out was its constraint management subsystem. It not only supported explicit constraints, added to entities after they were drawn, it supported implicit constraints, created as entities were drawn. For example, if you started to draw a line, and brought the cursor close to the endpoint of another line, it would snap to that endpoint. And it would remember that the two lines were connected. If, while editing, you moved one line, the other line would move with it.
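That snap-and-merge behavior can be illustrated with a tiny data structure: once two lines reference a single shared point, the coincident relationship maintains itself. This is a minimal Python sketch of the idea, not Sketchpad’s actual plex representation:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

class Line:
    """A line segment referencing Point objects. Snapping two
    endpoints together means both lines share one Point."""
    def __init__(self, start, end):
        self.start, self.end = start, end

# Two lines snapped together at a common endpoint share the same Point:
joint = Point(10, 10)
a = Line(Point(0, 0), joint)
b = Line(joint, Point(20, 0))

# Moving line a's endpoint also moves line b's start: the implicit
# "coincident" constraint is maintained by the shared reference.
a.end.move(5, 5)
print(b.start.x, b.start.y)  # 15 15
```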
Sketchpad included 17 different types of constraints, including vertical, horizontal, perpendicular, coincident, parallel, aligned, equal size, and more. These native (or “atomic”) constraints could be combined, to create more complex relationships. Sketchpad even allowed the visual display of constraints on screen, using icons (symbols) to represent each type.
With the constraint system, it was possible to loosely sketch a shape, then add geometric and topological relationships to modify it into the exact shape you needed. It was even possible to use constraints to do structural analysis of lattice trusses, such as might be found on cantilever and arch bridges.
Visually, Sketchpad was surprisingly interactive. It supported rubberbanding when drawing or editing entities (so the entities would stretch as you moved the cursor.) It supported dynamic move, rotate, and scale of entities (meaning that they moved, rotated, and scaled as you moved the cursor.) It not only supported zoom and pan (dynamically, of course), but did so transparently—even when you were in the midst of another drawing or editing operation.
Sketchpad was designed to be extensible, with provision for adding both new graphical element types, and new constraint types. Shortly after Sutherland submitted his Sketchpad thesis, Timothy E Johnson submitted his Masters thesis describing Sketchpad III, a 3D version of the program. About the same time, Lawrence G. Roberts submitted his PhD thesis, where he had added support to Sketchpad for 3D solids, including assemblies and real-time hidden line removal.

Timothy E. Johnson
While it’s likely that Sketchpad would have gotten plenty of attention on its own, Sutherland, Johnson, and Roberts each made 16 mm movies demonstrating their work. A combination of these films was used in a 30-minute program in 1964 for Boston TV station WGBH. (A film that appears to be an edited version of this is on YouTube. Just search for “Ivan Sutherland.”) Further, both Sutherland and Johnson presented papers on their work at the 1963 Spring Joint Computer Conference.
Sketchpad pioneered some of the most important concepts in computing, including the graphical user interface, non-procedural programming, and object-oriented programming. If you use a computer or smart phone, you’re using technology pioneered by Sketchpad.
Sutherland didn’t rest on his laurels after Sketchpad. He went on to run ARPA (the predecessor of DARPA.) He co-created the first virtual reality and augmented reality head-mounted display. He co-founded Evans and Sutherland, where he did pioneering work in the field of real-time hardware, accelerated 3D graphics, and printer languages. He was a Fellow and Vice President at Sun Microsystems. He taught at Harvard, University of Utah, and Caltech. Now, at the age of 74, he is heading up research in asynchronous computing at Portland State University.
As a result of his work on Sketchpad, and his many subsequent contributions to computing, Sutherland has received a dazzling array of honors, including the National Academy of Engineering First Zworykin Award, the IEEE Emanuel R. Piore Award, the ACM Steven A. Coons Award, the ACM Turing Award, the IEEE John von Neumann Medal, and, most recently, the Kyoto Prize.
Alan Kay, himself a recipient of many honors for his pioneering work in computing, has described Sketchpad as “the most significant thesis ever done.” At one point, he asked Sutherland, “How could you possibly have done the first interactive graphics program, the first non-procedural programming language, and the first object-oriented software system, all in one year?” Sutherland’s response was “well, I didn’t know it was hard.”
What about CAD?
As easy as it is to trace the lines of influence from Sketchpad directly to Apple and Microsoft, it’s a little harder to trace the lines of influence from Sketchpad to today’s modern CAD systems. Mostly because those lines are so pervasive.
Anyone who went from MIT into the CAD industry in the 1960s or 1970s—and there were many people who did—was influenced by Sketchpad. Even Jon Hirschtick, a mid-1980s MIT graduate who went on to found SolidWorks, was influenced by Sketchpad.
Despite Sketchpad’s significance, no modern CAD systems actually trace their roots back to Sketchpad. There are a few good reasons for this: First, Sketchpad was a proof-of-concept program for human-machine interaction. Sutherland never intended it to be the basis of a commercial product. Second, Sketchpad was designed to run on the TX-2, a non-commercial research computer. It would have been difficult to port it to a commercial computer (and it’s questionable whether there were any commercial computers at the time that had sufficient capacity to run Sketchpad.)
The high costs of computing, and the lack of sufficiently good graphics display hardware made commercializing Sketchpad a practical impossibility. It wouldn’t be until 1969 that Applicon and Computervision were able to begin delivering commercial CAD systems that could actually produce drawings economically.
The deeper story
What I’ve written so far about Sketchpad could be found in Wikipedia, or in most simple histories of the CAD industry. But there is a deeper story. It starts with this observation: Sutherland never called Sketchpad a computer-aided design system. This, despite the fact that, among those supervising his work on Sketchpad were the very people who had coined the term, and defined the requirements, for Computer-Aided Design.
In December, 1959, The Mechanical Engineering Department and Electronic Systems Laboratory of the Electrical Engineering Department of MIT entered into a joint project, sponsored by the US Air Force, to explore the possibilities for something they called “Computer-Aided Design.”
The next year, in October, 1960, Douglas Ross, head of the Electronic Systems Laboratory’s Computer Application Group, published a technical memorandum titled “Computer-Aided Design: A Statement of Objectives,” laying out his vision. A month later, Steven Coons and Robert Mann, of the Mechanical Engineering Department’s Design and Graphics Group published a complementary memorandum, titled “Computer-Aided Design Related to the Engineering Design Process,” laying out their philosophy of approach. While each group had a somewhat different philosophy, their common goal was to evolve a man-machine system which would permit a human designer to work together on creative design problems.
At the time, MIT was uniquely qualified to take on this research project. They had the TX-0, a research computer that was optimized for exploring human-machine interaction, and, located at MIT Lincoln Laboratory was the TX-2, an even bigger research computer.
During the winter of 1960-61, Ivan Sutherland spent some time working on the TX-0, using its display and light pen. He got the idea that the application of computers to making line drawings would make an interesting PhD thesis subject. In the fall of 1961, Professor Claude Shannon signed on to supervise Sutherland’s computer drawing thesis. Among others on his thesis committee were Marvin Minsky and Steven Coons.
Though Sutherland was not a part of the MIT Computer-Aided Design Project, he was given tremendous support. Wesley Clark, then in charge of computer applications at Lincoln Laboratory, agreed to give him access to the TX-2. By November, 1961, Sutherland had the first version of Sketchpad working. This version, based on an internal project memorandum authored by Coons, could draw horizontal and vertical lines, and supported zooming of the display. In his thesis, Sutherland said “this early effort in effect provided the T-square and triangle capabilities of conventional drafting.” It was definitely more of a computer-aided drafting system than a computer-aided design system.
The version of Sketchpad described in Sutherland’s thesis was quite a bit more advanced than that first version. Based on a suggestion from Shannon, it supported both line segments and arcs. Sutherland also incorporated concepts developed by members of the Computer-Aided Design Project, including plex programming (a precursor to modern object-oriented programming), the Algorithmic Theory of Language, the Theory of Operators, and the Bootstrap Picture Language. This version of Sketchpad also included a constraint solver developed by Lawrence Roberts.
Sutherland gave a presentation on Sketchpad at the 1963 Spring Joint Computer Conference. Also speaking there were Coons, whose presentation was titled “An outline of the requirements for a computer-aided design system,” Ross and Jorge Rodriquez, who presented “Theoretical foundations for the computer-aided design system,” and Robert Stotz, who presented on “Man-machine console facilities for computer-aided design.”
Sutherland, like many other people who have accomplished great things, stood on the shoulders of giants. Clark had designed the TX-2, a computer perfectly suited to creating an interactive drawing program. Engineers at Lincoln Laboratory had optimized the design of light pens. Shannon had created information theory. Roberts had contributed solver technology. But it was Ross and Coons who provided Sutherland with many of the conceptual underpinnings that helped make Sketchpad really stand out.
Even though Sutherland wasn’t a member of the MIT CAD Project, Ross and Coons were happy to support and promote his work. They had a much larger vision for Computer-Aided Design, but Sketchpad was an excellent proof of concept, and reflected well on them.
Ross, writing in 1967, said “Sutherland’s skill, inventiveness, and diligence in expressing these powerful concepts in a smoothly functioning system, making maximum use of the powerful features of the TX-2 Computer, enabled Sketchpad to bring to life for many people the vast potential for computer-aided design. In particular, the widely distributed movies of Sketchpad in operation have had a profound influence on the whole field of computer graphics.”
The lessons of Sketchpad
Sutherland never wanted to create a computer-aided design system. He wanted to create a computer drawing system. That such a system could be used for drafting, or as a tool for engineering design was of secondary importance to him.
Sutherland, in his paper Technology and Courage, said “Without the fun, none of us would go on!” In Sketchpad, he went as far as he could with computer drawing software while still having fun. Taking it further would have been more like work than fun (as many CAD developers have discovered over the last 50 years). In the process of creating Sketchpad, Sutherland discovered that the most challenging impediment to making such a system practical was in the performance of its display system. In 1968, he co-founded Evans and Sutherland, and tackled that problem.
Sutherland created two versions of Sketchpad: one that did drafting, and one that did design. Even today, people who see the movies of the design version of Sketchpad are blown away by its capability. Yet, what capabilities do they look for when they go to buy 2D CAD software? Drafting.
Over time, a number of companies have developed Sketchpad-like 2D design programs featuring constrained sketching. They’ve mostly failed in the market. At the same time, AutoCAD, a simple 2D drafting program, grew to become the world’s most popular CAD program. It only got constraint capabilities in 2010, some 47 years after Sketchpad had them.
The place where Sketchpad-like capability has found acceptance is in 3D feature-based modeling. The sketching modules of programs such as Pro/E and SolidWorks are very much like Sketchpad, at least in capability. Where they fall short of Sketchpad is in extensibility.
Possibly the most valuable lesson from Sketchpad may be taken from the observation that Sutherland actually built two versions of the software. When he found that the first version couldn’t be easily modified to do what was required, he started over, and built on a clean—and carefully designed—software architecture.

If you’d like to learn more about the history of CAD, including Sketchpad and the MIT CAD Project, visit www.designworldonline.com/cadhistory.
There, you’ll find copies of the original documents that launched the CAD industry, including, for the first time since they were published in 1961, downloadable versions of the original MIT technical memos on Computer-Aided Design by Doug Ross, Steven Coons, and Robert Mann. You’ll also find a free downloadable version of David Weisberg’s 667-page authoritative history of the CAD industry, The Engineering Design Revolution.

Supersonic Ping-Pong gun fires balls at Mach 1.2

Few things capture the attention of physics students like a gun that fires Ping-Pong balls, according to a mechanical engineer who just built one that accelerates the balls to supersonic speeds. 

“You can shoot Ping-Pong balls through pop cans and it is great, it is so captivating, it is so compelling that you can get kids’ attention and once you’ve got their attention, you can teach them something,” Mark French, the Purdue University assistant professor who built the gun, told NBC News.
The guns typically work by sealing a tube slightly larger in diameter than a Ping-Pong ball with packaging tape and sucking all the air out to create a vacuum. Once the seal is broken at one end, air rushes into the tube and pushes the ball down the barrel.
“The ball doesn’t fit tightly in the tube, a little bit of air gets past the ball and when it gets to the seal at the other end, that little puff of air gets compressed and blows the seal out of the way so the ball can come out at 600 or 700 feet a second,” French explained.
After getting tons of mileage in class over the past few years — as well as a divot or two in his classroom wall — with a gun he built based on a design in a scientific journal, he started thinking that he could make the ball come out even faster. And, well, faster is better.
The trick, he figured, was to get the air that pushes the ball out the tube (gun barrel) to move faster. To do this, French borrowed a nozzle design with a pinch in the middle that aerospace engineers use to get air moving at supersonic speeds in their wind tunnels.
As air enters the so-called convergent-divergent or de Laval nozzle, it accelerates through the narrowing throat, reaching supersonic speeds as the nozzle widens again and the air expands.
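The geometry of the diverging section sets the exit speed. As a rough sketch of the relation involved (this is the textbook isentropic area-Mach relation from gas dynamics, not an analysis taken from French’s paper), the exit Mach number follows from the ratio of exit area to throat area:

```python
# Textbook isentropic area-Mach relation for a convergent-divergent
# (de Laval) nozzle. Given the ratio of exit area to throat area,
# it yields the supersonic exit Mach number.
GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(mach, gamma=GAMMA):
    """A/A* as a function of Mach number for isentropic flow."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return (1.0 / mach) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def supersonic_mach(a_ratio, lo=1.0, hi=10.0):
    """Invert the relation on the supersonic branch by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid) < a_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A modest exit-to-throat area ratio of about 1.04 is already enough
# for roughly Mach 1.23:
print(round(supersonic_mach(1.04), 2))
```

The area ratio used here is illustrative; the actual geometry of French’s nozzle is not given in the article.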
“I thought, okay, I’m going to treat this thing like a little wind tunnel,” he said. 
To do so, he put a convergent-divergent nozzle at the opposite end of the tube from where the ball exits and, behind that, a pressure chamber made out of PVC tubing. 
When the chamber is pumped up to about 45 pounds per square inch, the pressure breaks the seal.

“That pressurized air goes through the nozzle just like it does in a supersonic wind tunnel and accelerates to supersonic speed out the other end and pushes the ball ahead of it,” he said. 
“At least, that’s what we think is going on,” he added. “We haven’t done any analyses on this … we are still doing some more tests … but whatever is going on, it is definitely coming out at Mach 1.23.”
Yes, that’s fast; faster than an F-16 flying at top speed at sea level, noted MIT’s Physics arXiv Blog.
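As a back-of-the-envelope check of what that number means in everyday units (my arithmetic, not the paper’s), the speed of sound in an ideal gas is a = sqrt(gamma · R · T):

```python
# Converting Mach 1.23 at sea level into everyday units using the
# ideal-gas speed of sound, a = sqrt(gamma * R * T).
import math

GAMMA = 1.4           # ratio of specific heats for air
R_AIR = 287.05        # J/(kg*K), specific gas constant of air
T_SEA_LEVEL = 288.15  # K, standard-atmosphere temperature (15 C)

a = math.sqrt(GAMMA * R_AIR * T_SEA_LEVEL)     # speed of sound, m/s
v = 1.23 * a                                   # the ball's reported speed
print(round(a), round(v), round(v * 3.28084))  # m/s, m/s, ft/s
```

That works out to roughly 1,370 feet per second, about double the 600 or 700 feet per second of the plain vacuum design.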
French and colleagues Craig Zehrung and Jim Stratton describe the gun in a paper posted Jan. 22 on arXiv.org, a server where pre-prints of scientific papers are posted. 

* John Roach is a contributing writer for NBC News.  (January 2012)

Alan Turing, father of computer science

I believe it is very important to highlight the biographies of people who left their mark on different fields of human study and interest, in this case science and technology. It does not matter that they are no longer with us, nor that they belong to a history that the new generations disregard or would rather forget.

What matters is to keep in mind that progress in science and technology is not due to the contribution of a single country or person, but to the contributions of everyone, countries and people alike, to a greater or lesser degree, but contributions all the same. 

Therefore, highlighting the positive work of some of that multitude of people (scientists, engineers and technicians) on this blog is not only a public acknowledgment; it is also an obligation to leave solid facts and foundations for the new generations who follow this path, or who want to know the origin and the reason behind all these achievements in science and technology.

Farewell.
Carlos Tigre sin Tiempo = CTsT

*********************************************************************************

Science and technology commemorate Alan Turing, the father of computer science

The scientific and technological community is commemorating the centenary of the beloved father of modern computing, the British mathematical genius Alan Turing, whose decisive code-breaking work was key to defeating Nazi Germany during World War II.
June 23 marks the hundredth anniversary of his birth in London, and many cities have organized conferences and exhibitions to pay tribute to the work of a man regarded as a true mathematical genius, yet persecuted throughout his life for his homosexuality.

“Turing is probably the only person to have made world-changing contributions to all three kinds of intelligence: human, artificial and military,” the prestigious scientific journal Nature said in a recent editorial.

Turing died at the age of 41, poisoned with cyanide (there are doubts about whether he really committed suicide, as was commonly claimed until now), after being convicted in 1952 of “gross indecency” because of his homosexuality, illegal in the United Kingdom at the time, and being subjected to forced chemical castration. It is worth noting that the British government did not explicitly apologize for the cruel, discriminatory and “inhumane” treatment the scientist received until as late as 2009.
Some believe that the scientist, known for his eccentricity, took his own life in 1954 by eating a poisoned apple, but this could never be proven. In any case, the monument dedicated to him near the University of Manchester depicts him on a bench with an apple in his hand.

Gifted with a privileged intuition, Turing laid the foundations of modern computing and the criteria for artificial intelligence, and is known above all for breaking the codes used by the German army and its Enigma cipher machine, a feat that saved millions of lives by shortening World War II.

But his work goes further. In 1936, Turing, who had announced plans to “build a brain,” published a paper describing the “universal Turing machine.” He was the first to consider the possibility of programming a machine with “data” so that it could carry out other tasks as well, just like today’s computers.
In fact, Google changed its logo today as a tribute, with a spectacular “doodle” that emulates precisely the “Turing machine” he proposed in theory.
When it was finally built by other scientists in 1950, the first version of Turing’s Automatic Computing Engine (ACE) was the fastest computing machine in the world.
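The “universal machine” idea is easiest to see in code. Below is a minimal sketch of a Turing machine simulator (my illustration, not Turing’s original 1936 notation): the rules table is the “data” that programs an otherwise fixed machine, which is exactly the point made above. The two-state bit-flipping machine is hypothetical.

```python
# A minimal Turing machine simulator: one head moving over a tape,
# driven entirely by a table of rules supplied as data.
def run_turing_machine(tape, rules, state="A", halt="H", blank="_"):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = list(tape)
    pos = 0  # head position; this sketch assumes the head never moves left of cell 0
    while state != halt:
        symbol = cells[pos] if pos < len(cells) else blank
        write, move, state = rules[(state, symbol)]
        if pos < len(cells):
            cells[pos] = write
        else:
            cells.append(write)  # grow the tape on demand, as on an infinite tape
        pos += 1 if move == "R" else -1
    return "".join(cells).strip(blank)

# A hypothetical two-state machine that inverts a binary string,
# halting when it reads the first blank cell:
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "H"),
}
print(run_turing_machine("1011", rules))  # -> 0100
```

Swapping in a different rules table changes what the machine computes without touching the simulator, which is the insight behind the stored-program computers mentioned above.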

* Text by Marietta Le Roux (AFP) | ELMUNDO (Paris-Madrid)

Spanish scientists discover a new laser material with applications in medicine

It eliminates the need to use large volumes of organic solvents, “most of them toxic and carcinogenic”


A team from the Faculty of Science and Technology of the University of the Basque Country (UPV/EHU), together with researchers from the Spanish National Research Council (CSIC) and the Complutense University of Madrid (UCM), has developed a new laser material with applications in fields as diverse as medicine, agriculture and environmental science.

This new laser material eliminates the need to use large volumes of organic solvents, “most of them toxic and carcinogenic,” according to the Faculty of Science and Technology of the Biscay campus.

Biophotonics relies on imaging, sensing, analyzing and manipulating biological systems through light, and the new material improves the efficiency and stability of the commercial dyes employed in that field.

The work has been published in the journal Nature Photonics.
“Efficient and long-lasting”

The scientists have obtained, “for the first time,” an “efficient and long-lasting” emission of red laser light by incorporating two dye molecules confined in latex nanoparticles dispersed in water.

As they explain, “the wavelength of red light is key for photodynamic therapy, with uses in ophthalmology and dermatology, for example.”

“Using red-light emitters in biomedicine, with wavelengths above 650 nanometers, has certain advantages: biological tissues are more transparent to that light, so it can penetrate deeper, which makes it easier to use in surgery and in photodynamic therapy treatments based on the light activation of ingested drugs,” explained CSIC researcher Luis Cerdán, who works at the Rocasolano Institute of Physical Chemistry and belongs to the group that carried out the laser characterization and the theoretical study.

Until now, the use of commercial dyes for these applications was limited “by how little of the excitation light they absorbed,” a drawback that reduced their efficiency.

Moreover, the dyes “tend to be damaged easily when excited, which reduces their technological usefulness and increases the economic cost.”

To solve these problems, the scientists turned to an energy-transfer process known as Förster Resonance Energy Transfer (FRET), based on incorporating two dyes: a donor, which efficiently absorbs the excitation light and barely degrades, and an acceptor, which emits light after receiving the donor’s energy.

UCM researcher Eduardo Enciso, who synthesized the nanoparticles and collaborated in the theoretical analysis, explained: “We used the dyes Rhodamine 6G as the donor and Nile Blue as the acceptor. To guarantee the proximity of the dyes and, therefore, a higher efficiency, we confined them in 50-nanometer polymer nanoparticles dispersed in water.”
Useful lifetime

Enciso added that “integrating the dyes into these structures reduces the processes that degrade their molecules after excitation by light, a particularly serious problem for red-emitting dyes; this prevents the loss of their emission properties and multiplies their useful lifetime by eight.”

Photophysical characterization has also made it possible to study the energy-transfer process in the system, which occurs “very rapidly,” in under 500 picoseconds (a picosecond is one trillionth of a second).

According to researchers Jorge Bañuelos and Iñigo López Arbeloa, who carried out this part of the research at the UPV/EHU Faculty of Science and Technology, “the energy-transfer mechanism is very complex; it occurs mainly through the interaction of the electric dipoles of the donor and acceptor dyes and takes place at a mean distance of three nanometers.”
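That steep dependence on donor-acceptor distance is why confining both dyes inside 50-nanometer particles pays off. As a rough numerical illustration of the standard Förster relation (the Förster radius R0 below is an assumed, typical value, not one measured for the Rhodamine 6G / Nile Blue pair):

```python
# The Förster relation behind FRET: transfer efficiency falls off with
# the sixth power of donor-acceptor distance r,
#     E = 1 / (1 + (r / R0)**6)
# where R0, the pair-specific Förster radius, is the distance at which
# E = 50%. The 5 nm used here is an assumed, typical order of magnitude.
def fret_efficiency(r_nm, r0_nm=5.0):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# At the ~3 nm mean separation reported for the nanoparticles, transfer
# is nearly quantitative; by 8 nm it has almost vanished:
for r in (3.0, 5.0, 8.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r):.2f}")
```

Under this assumed R0, the reported 3-nanometer mean distance would put the dye pair comfortably inside the high-efficiency regime, consistent with the efficient red emission the group describes.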