Most powerful in New England

The new TX-Green computing system at the MIT Lincoln Laboratory Supercomputing Center (LLSC) has been named the most powerful supercomputer in New England, 43rd most powerful in the U.S., and 106th most powerful in the world. A team of experts at TOP500 ranks the world’s 500 most powerful supercomputers biannually. The systems are ranked based on the LINPACK Benchmark, which is a measure of a system’s floating-point computing power, i.e., how fast a computer solves a dense system of linear equations.
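The LINPACK task is easy to sketch. The toy illustration below (NumPy's dense solver with an arbitrary matrix size; not the benchmark code itself) shows the core computation the benchmark times:

```python
# Toy illustration of the computation the LINPACK Benchmark times:
# solving a dense system of linear equations A x = b.
import numpy as np

rng = np.random.default_rng(0)
n = 500                        # real LINPACK runs use far larger matrices
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# LU factorization with partial pivoting, roughly (2/3)n^3 flops;
# the benchmark score divides that flop count by wall-clock time.
x = np.linalg.solve(A, b)

residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual: {residual:.2e}")
```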

Established in early 2016, the LLSC was developed to enhance computing power and accessibility for more than 1,000 researchers across the laboratory. The LLSC uses interactive supercomputing to augment the processing power of desktop systems to process large sets of sensor data, create high-fidelity simulations, and develop new algorithms. Located in Holyoke, Massachusetts, the new system is the only zero-carbon supercomputer on the TOP500 list; it uses energy from a mixture of hydroelectric, wind, solar, and nuclear sources.

In November, Dell EMC installed a new petaflop-scale system, which consists of 41,472 Intel processor cores and can perform 10^15 operations per second. Compared to LLSC’s previous

Lead to fully automated speech recognition

Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words.

But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.

At the Neural Information Processing Systems conference this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new approach to training speech-recognition systems that doesn’t depend on transcription. Instead, their system analyzes correspondences between images and spoken descriptions of those images, as captured in a large collection of audio recordings. The system then learns which acoustic features of the recordings correlate with which image characteristics.
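The correspondence idea can be sketched with a toy example. The snippet below (hypothetical acoustic-unit and category names; the CSAIL system itself uses neural networks, not counting) simply tallies how often each acoustic unit co-occurs with each visual category in paired recordings and images:

```python
# Toy sketch of cross-modal correspondence learning: count co-occurrences
# between acoustic units and visual categories across paired examples,
# then map new audio to the most frequently associated visual concept.
from collections import Counter, defaultdict

# Hypothetical training pairs: (acoustic cluster IDs in a recording,
#                               visual categories detected in the paired image)
pairs = [
    (["ac3", "ac7"], ["dog", "grass"]),
    (["ac3", "ac1"], ["dog", "ball"]),
    (["ac5", "ac2"], ["beach", "wave"]),
    (["ac5", "ac9"], ["beach", "sand"]),
]

cooccur = defaultdict(Counter)
for acoustic_units, visual_cats in pairs:
    for unit in acoustic_units:
        cooccur[unit].update(visual_cats)

def most_associated(acoustic_unit):
    """Visual category seen most often alongside this acoustic unit."""
    return cooccur[acoustic_unit].most_common(1)[0][0]

print(most_associated("ac3"))  # "dog": co-occurs with ac3 in both pairs
print(most_associated("ac5"))  # "beach"
```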

“The goal of this work is to try to get the machine to learn language more like the way humans do,” says Jim Glass, a senior research scientist at CSAIL and a co-author on the paper describing the new system. “The current methods that people use to train up speech recognizers

A unique moving target technique

When it comes to protecting data from cyberattacks, information technology (IT) specialists who defend computer networks face attackers armed with some advantages. For one, while attackers need only find one vulnerability in a system to gain network access and disrupt, corrupt, or steal data, the IT personnel must constantly guard against and work to mitigate myriad, varied network intrusion attempts.

The homogeneity and uniformity of software applications have traditionally created another advantage for cyber attackers. “Attackers can develop a single exploit against a software application and use it to compromise millions of instances of that application because all instances look alike internally,” says Hamed Okhravi, a senior staff member in the Cyber Security and Information Sciences Division at MIT Lincoln Laboratory. To counter this problem, cybersecurity practitioners have implemented randomization techniques in operating systems. These techniques, notably address space layout randomization (ASLR), diversify the memory locations used by each instance of the application at the point at which the application is loaded into memory.
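The effect of layout randomization can be illustrated with a toy simulation (purely illustrative Python, not how ASLR is actually implemented in an operating system): each simulated instance loads the same binary image at a different randomly chosen base address, so a hard-coded address that works against one instance misses in the others.

```python
# Toy simulation of address space layout randomization (ASLR).
# Every instance of an "application" keeps the same internal offsets,
# but each gets a different randomized load base, so the absolute
# address of any given function differs from instance to instance.
import random

FUNC_OFFSET = 0x4A0  # fixed offset of a sensitive function within the binary

def load_instance(seed):
    """Return the absolute address of the function in one instance."""
    rng = random.Random(seed)
    base = rng.randrange(0x10000, 0x7FFF0000, 0x1000)  # page-aligned base
    return base + FUNC_OFFSET

# Five instances of the same application, five (almost surely) different
# absolute addresses: a single hard-coded exploit address no longer works.
addresses = {load_instance(seed) for seed in range(5)}
print([hex(a) for a in sorted(addresses)])
```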

In response to randomization approaches like ASLR, attackers developed information leakage attacks, also called memory disclosure attacks. Through these software assaults,

Learning system spontaneously

MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.

This property wasn’t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

“This is not a proof that we understand what’s going on,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But

Require costly hand annotated data

In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook.

But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data first has to be annotated by hand, which is prohibitively expensive for all but the highest-demand applications.

Sound recognition may be catching up, however, thanks to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). At the Neural Information Processing Systems conference next week, they will present a sound-recognition system that outperforms its predecessors but didn’t require hand-annotated data during training.

Instead, the researchers trained the system on video. First, existing computer vision systems that recognize scenes and objects categorized the images in the video. The new system then found correlations between those visual categories and natural sounds.
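One way to sketch this training recipe is as teacher-student transfer: a vision model supplies scene labels for video clips, and an audio classifier learns to predict those labels from the synchronized soundtrack alone. The toy example below (synthetic features and a nearest-centroid classifier; the data and model are assumptions for illustration, not the paper's architecture) captures the idea that no human annotated the audio:

```python
# Sketch of training a sound recognizer from vision-derived pseudo-labels.
import numpy as np

rng = np.random.default_rng(1)

# Pseudo-labels from a vision model: 0 = "beach" scene, 1 = "crowd" scene.
# Synthetic 8-dim "audio features" for each clip, one cluster per scene.
beach_audio = rng.normal(loc=0.0, scale=0.5, size=(50, 8))
crowd_audio = rng.normal(loc=3.0, scale=0.5, size=(50, 8))
X = np.vstack([beach_audio, crowd_audio])
y = np.array([0] * 50 + [1] * 50)   # labels came from vision, not humans

# Train a nearest-centroid audio classifier on the vision-derived labels.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify_sound(features):
    """Predict the visual scene label from audio features alone."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(dists.argmin())

test_clip = rng.normal(loc=3.0, scale=0.5, size=8)  # crowd-like sound
print(classify_sound(test_clip))
```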

“Computer vision has gotten so good that we can transfer it to other domains,” says Carl Vondrick, an MIT graduate

Venture capitalists gather to discuss

Surviving breast cancer changed the course of Regina Barzilay’s research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. That includes what treatments to recommend, but also whether a patient’s sample even warrants a cancer diagnosis, she explained at the Nov. 10 Machine Intelligence Summit, organized by MIT and venture capital firm Pillar.

“We do more machine learning when we decide on Amazon which lipstick you would buy,” said Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT. “But not if you were deciding whether you should get treated for cancer.”

Barzilay now studies how smarter computing can help patients. She wields the powerful predictive approach called machine learning, a technique that allows computers, given enough data and training, to pick out patterns on their own — sometimes even beyond what humans are capable of pinpointing.

Machine learning has long been vaunted in consumer contexts — Apple’s Siri can talk with us because machine learning enables her to understand natural human speech — yet the summit gave a glimpse of the approach’s much broader potential. Its reach could offer not only better Siris (e.g., Amazon’s “Alexa”), but improved health care

Publicly traded corporation

Cook joined Apple in 1998 and was named its CEO in 2011. As chief executive, he has overseen the introduction of some of Apple’s innovative and popular products, including iPhone 7 and Apple Watch. An advocate for equality and champion of the environment, Cook reminds audiences that Apple’s mission is to change the world for the better, both through its products and its policies.

“Mr. Cook’s brilliance as a business leader, his genuineness as a human being, and his passion for issues that matter to our community make his voice one that I know will resonate deeply with our graduates,” MIT President L. Rafael Reif says. “I am delighted that he will join us for Commencement and eagerly await his charge to the Class of 2017.”

Before becoming CEO, Cook was Apple’s chief operating officer, responsible for the company’s worldwide sales and operations, including management of Apple’s global supply chain, sales activities, and service and support. He also headed the Macintosh division and played a key role in the development of strategic reseller and supplier relationships, ensuring the company’s flexibility in a demanding marketplace.

“Apple stands at the intersection of liberal arts and technology, and we’re proud to have many outstanding MIT graduates on our team,”

Fabricate drones with a wide

This fall’s new Federal Aviation Administration regulations have made drone flight easier than ever for both companies and consumers. But what if the drones out on the market aren’t exactly what you want?

A new system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the first to allow users to design, simulate, and build their own custom drone. Users can change the size, shape, and structure of their drone based on the

To demonstrate, researchers created a range of unusual-looking drones, including a five-rotor “pentacopter” and a rabbit-shaped “bunnycopter” with propellers of different sizes and rotors of different heights.

“This system opens up new possibilities for how drones look and function,” says MIT Professor Wojciech Matusik, who oversaw the project in CSAIL’s Computational Fabrication Group. “It’s no longer a one-size-fits-all approach for people who want to make and use drones for particular purposes.”

The interface lets users design drones with different propellers, rotors, and rods. It also provides guarantees that the drones it fabricates can take off, hover and land — which is no simple task considering the intricate technical trade-offs associated with drone weight, shape, and control.

“For example, adding more rotors generally lets you carry more weight, but you also need to think about

Researchers named ACM

This week the Association for Computing Machinery (ACM) announced its 2016 fellows, which include four principal investigators from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): professors Erik Demaine, Fredo Durand, William Freeman, and Daniel Jackson. They were among the 1 percent of ACM members to receive the distinction.

“Erik, Fredo, Bill, and Daniel are wonderful colleagues and extraordinary computer scientists, and I am so happy to see their contributions recognized with the most prestigious member grade of the ACM,” says CSAIL Director Daniela Rus, who herself was named a fellow last year. “All of us at CSAIL are very proud of these researchers for receiving these esteemed honors.”

ACM’s 53 fellows for 2016 were named for their distinctive contributions spanning such computer science disciplines as computer vision, computer graphics, software design, machine learning, algorithms, and theoretical computer science.

“As nearly 100,000 computing professionals are members of our association, to be selected to join the top 1 percent is truly an honor,” says ACM President Vicki L. Hanson. “Fellows are chosen by their peers and hail from leading universities, corporations and research labs throughout the world. Their inspiration, insights and dedication bring immeasurable benefits that improve lives and help drive the global economy.

Unexpected career path in the medical field

During January of her junior year at MIT, Caroline Colbert chose to do a winter externship at Massachusetts General Hospital (MGH). Her job was to shadow the radiation oncology staff, including the doctors who care for patients and the medical physicists who design radiation treatment plans.

Colbert, now a senior in the Department of Nuclear Science and Engineering (NSE), had expected to pursue a career in nuclear power. But after working in a medical environment, she changed her plans.

She stayed at MGH to work on building a model to automate the generation of treatment plans for patients who will undergo a form of radiation therapy called volumetric-modulated arc therapy (VMAT). The work was so interesting that she is still involved with it and has now decided to pursue a doctoral degree in medical physics, a field that allows her to blend her training in nuclear science and engineering with her interest in medical technologies.

She’s even zoomed in on schools with programs that have accreditation from the Commission on Accreditation of Medical Physics Graduate Programs so she’ll have the option of having a more direct impact on patients. “I don’t know yet if I’ll be more interested in clinical work, research, or both,”

Policy and technology

“When you’re part of a community, you want to leave it better than you found it,” says Keertan Kini, an MEng student in the Department of Electrical Engineering, or Course 6. That philosophy has guided Kini throughout his years at MIT, as he works to improve policy both inside and outside of MIT.

As a member of the Undergraduate Student Advisory Group, former chair of the Course 6 Underground Guide Committee, and member of both the Internet Policy Research Initiative (IPRI) and the Advanced Network Architecture group, Kini has focused his research on finding ways that technology and policy can work together. As Kini puts it, “there can be unintended consequences when you don’t have technology makers who are talking to policymakers and you don’t have policymakers talking to technologists.” His goal is to allow them to talk to each other.

At 14, Kini first started to get interested in politics. He volunteered for President Obama’s 2008 campaign, making calls and putting up posters. “That was the point I became civically engaged,” says Kini. After that, he campaigned for a ballot initiative to raise more funding for his high school, and he hasn’t stopped being interested in public policy since.

High school was

Big data manageable

One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time consuming on the full set.

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”
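The flavor of the coreset idea can be shown with a simple sketch. The example below uses uniform row sampling for a least-squares problem (a far cruder scheme than the paper's technique, and the sizes are arbitrary): solving on a small sample of rows closely approximates the solution on the full matrix at a fraction of the cost.

```python
# Sketch of the coreset idea: solve least squares on a small row sample
# and compare against the solution computed on the full matrix.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20000, 5, 500          # n rows shrunk to an m-row sample

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Solution on the full data set.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Solution on a uniform sample of rows (a stand-in for a true coreset,
# which would pick and weight rows to guarantee approximation bounds).
idx = rng.choice(n, size=m, replace=False)
x_core, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)

err = np.linalg.norm(x_core - x_full) / np.linalg.norm(x_full)
print(f"relative difference: {err:.4f}")
```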

As an example, in their paper the researchers apply their technique to a matrix —

Healthcare Improving The System

According to TIME Magazine, the American healthcare system is the worst amongst the 11 wealthiest nations, and that’s even after relative improvements following the Affordable Care Act. However, there’s a silver lining amidst the surge of startups and small businesses, a sector which enjoyed a healthy boom following the 2009 Great Recession. Startups in healthcare have the ability to improve the system, and they’ve been doing so at an impressive clip.

According to Mark Diamond, CMP at the Senior Education Counsel, “While the idea of self-funding your long-term care on an as-needed basis is admirable, you can’t necessarily guarantee that you will have enough. Life happens—and while we may not be able to plan for a lot of it, we can certainly try to cover ourselves in the event of something happening.”

Take a look at these healthcare startups that are making big strides in improving the system:

1. Moxe Health: Healthcare data integration is a huge struggle for medical providers. In fact, simply moving to electronic systems in lieu of keeping a costly hardcopy system has been a challenge. Moxe keeps data safe, keeps integration in pace with the latest tools, and offers proprietary software that eliminates the need for a middleman

Your Website from Hackers

Hackers are rampant on the internet, and it’s not unusual for people to have their websites destroyed by an attack. While these cyberattacks are quite common, there are many ways to protect yourself from the hackers who roam the internet. To keep your information safe, you can use these tips to beef up the security of your website or your personal accounts.

Get a proper web host

The first step is to get a good web host for your website. It can be tempting to choose a cheaper, less well-known host, but that choice will cost you more later if your website is attacked by hackers. If you use WordPress, one of the most recommended hosts is WP Engine, which offers among the most compatible and most secure enterprise solutions for WordPress. On the security side, WP Engine automatically backs up all your data every day and includes protection for your login details as well as DDoS protection. As for virus protection, WP Engine can automatically kick out

Computers that explain themselves

Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of 16 highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach

  • A team led by CSAIL director Daniela Rus developed an ingestible origami robot that unfolds in the stomach to patch wounds and remove swallowed batteries.
  • Researchers are working on NASA’s humanoid robot, “Valkyrie,” which will be programmed for trips into outer space and will perform tasks autonomously.
  • A 3-D printed robot was made of both solids and liquids and printed in a single step, with no assembly required.

Keeping data safe and secure

  • CSAIL hosted a cyber summit that convened members of academia, industry, and government, including featured speakers Admiral Michael Rogers, director of the National Security Agency; and Andrew McCabe, deputy director of the Federal Bureau of Investigation.
  • Researchers came up with a system for staying anonymous online that uses less bandwidth to transfer large files between anonymous users.
  • A deep-learning system called AI2 was shown to be able to predict 85 percent of cyberattacks with

The Benioff Scale and SAP’s Cloud Leadership Conundrum

Cloud computing, the artist formerly known as SaaS, has always been a proving ground for dynamic leadership. The standard – brash, outspoken, ubiquitous, successful – was set once upon a time by Marc Benioff, and ever since it’s been easy to measure cloud leadership by what I call the Benioff Scale. On a Benioff Scale of 1-10, where 1 is Ginni (Ginni who?) Rometty of IBM, and 10 is Marc himself, measuring cloud leadership by how many Benioffs a particular leader generates is as good a method as any.

Amazon’s Jeff Bezos clearly rates 10 Benioffs, and Microsoft’s Satya Nadella gets a 10 as well. Larry Ellison – how about if I pass on that one? Meg Whitman – 4 or 5 at best. Larry Page gets a 10, of course, though his enterprise cloud score would be much lower. Infor’s Charles Phillips gets an 8 for sincerity and vision, but the continued lack of customer momentum towards the cloud drives his overall score much lower.

The point is that the cloud is a marketer’s market, and the higher up on the Benioff scale an executive can go – the more brash, outspoken, ubiquitous and successful the leader is – the better

Technology That Job You Want

For this issue of ComputingEdge, we asked Phillip A. Laplante—professor of software engineering and co-director of the Software Engineering Group at Pennsylvania State University—about career opportunities in healthcare technology. Laplante’s research interests include real-time and embedded systems, image processing, and artificial intelligence. He co-authored the article “The Internet of Things in Healthcare: Potential Applications and Challenges” in IT Professional’s May/June 2016 issue.

ComputingEdge: What careers in healthcare technology will see the most growth in the next several years?

Laplante: It’s no secret that healthcare careers often require certifications and licenses, and those who earn them will command the higher salaries and have the best career potential. Every healthcare career will require computer and technical proficiency, a trend that will only increase in the future.

ComputingEdge: What advice would you give college students to give them an advantage over the competition?

Laplante: In the long run, a solid work ethic beats degrees, certification, and experience. Show up on time, work hard, and be respectful. It’s old-fashioned advice, but it still applies.

ComputingEdge: What advice would you give people changing careers midstream?

Laplante: You must be willing to reinvent yourself in all aspects. You might have to change locations, take a lower salary, or drop in the corporate hierarchy to increase your upward potential in the long run. Taking risks

Improve Your Blogging Efforts

2017 will be a big year for content marketing, with brands focusing on better ways to reach a wide audience. Marketers have spent enough time honing their content marketing strategies over the past few years and they now look for more sophisticated ways to drive traffic. As brands prepare to step up their marketing games in 2017, it’s more important than ever that they have the right tools for the job. Here are 20 tools that will help you take your business blog to the next level in 2017.

HubSpot’s Blog Topic Generator

If you try to generate blog content consistently, you may find that coming up with ideas is the biggest challenge. HubSpot provides this Blog Topic Generator to help.

Feedly

Another way to come up with ideas is to keep an eye on what’s trending. Feedly gives you a regular helping of the latest news specific to your subject matter interests.

WordPress Editorial Calendar

Top content marketers use editorial calendars to plan and manage posts. WordPress’s plugin is a free tool that can help.

Guestpost

In 2017, marketers that win at content marketing will have the tools necessary to amplify their messages. Guestpost helps brands find influencers to help spread the word.

Brandpoint Hub

If your content marketing efforts feel chaotic, Brandpoint

The Future of Tech

Technology trends that will reach adoption

1. Industrial IoT

With many millions of IoT sensors deployed in dozens of industrial-strength, real-world applications, this is one of the largest and most impactful arenas for big data analytics in 2017.

2. Self-driving Cars

In Silicon Valley, one can easily see up to three self-driving cars on the same street. While widespread adoption for general use is less likely in the near term, broader adoption will likely occur first in constrained environments such as airports and factories.

3. Artificial Intelligence, Machine Learning, Cognitive Computing

These overlapping areas are a fundamental requirement for big data analytics and for other areas of control and management. Machine learning, and deep learning in particular, are quickly transitioning from research lab to commodity products. On the software side, advanced engines and libraries from industry leaders, such as Facebook and Google, are making it to open source. On the hardware side, we see continually improving performance and scalability from existing technologies (CPUs and GPUs), as well as emerging accelerators. Consequently, writing domain-specific applications that can learn, adapt, and process complex and noisy inputs in near real time is easier than ever and a wide range of new applications is emerging.

4. 5G

While it is unlikely that 5G will have immediate

Improving Campus Security

College campuses aren’t inherently more dangerous than other areas of the country, but with so many young people packed in one place, security is a natural concern. Campus shootings and attacks, rape culture, and other high-profile incidents are putting a spotlight on the importance of campus security, and colleges across the country are looking to new forms of technology to help address student and parent worries.

So how are colleges pushing the limits of what can be accomplished with security technology?

Technological Breakthroughs

These are some of the most important, and most effective, ways colleges are taking action:

1. Emergency communication protocols. Not all crimes can be prevented. In the event of a violent attack or a natural disaster, communication is key to keeping as many people as safe as possible, and with smartphones in almost every student’s pocket, there are more ways than ever to spread that information. Colleges today have the ability to send out mass text messages or mass phone calls alerting students to emergency situations (along with instructions on what to do or where to go). Collecting and transmitting this information quickly, and in a way that all students can access, has become a top priority for colleges everywhere.

2. Anonymous crime reporting. Anonymous