
Effective memory management strategy can support chips with thousands of cores

Cloud computing generally means keeping organizational data in the cloud, from which organizations then access it. That data moves from one device to another, and this is possible only when you are online. There is no issue with using the cloud to store your data; the problem is how completely we rely on the internet to reach it. Yingwei Wang of the Department of Computer Science at the University of Prince Edward Island, Charlottetown, Canada, said, “If any problem happens to the servers or an internet connection is not available, the user cannot access their data.”
Organizations run on complex data, and keeping that data consistent between the cloud and interconnected local computers is a significant burden for administrators. In Wang’s architecture, dew servers sit on the local systems and act as a buffer between the local user and the cloud servers, avoiding data inconsistency between source and destination. In effect, it revives the old-school approach in which data lives on a local server and remains available whether the machine is on the network or not. Wang elucidated, “The dew server and its related databases have two functions: first, it provides the client with the same services as the cloud server provides; second, it synchronizes dew server databases with cloud server databases.”
But what is a dew server? It is a lightweight local server that continuously holds a copy of a given user’s data, keeping it available online or offline and synchronizing with the cloud server as soon as a connection becomes available again. The cloud-dew architecture can also make websites work offline. This is a significant concept for organizations that depend on internet connectivity to process huge amounts of data, since it can reduce their data expenses. Naturally, tasks such as filling in forms online or exchanging emails would still be difficult this way, because the code communicates with the server through several responses and redirects and therefore always needs a live connection. But functions like displaying images or playing audio and video will be possible through the “dewsite”, using the copy of the web content fetched during the last connection.
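To make Wang’s two functions concrete, here is a minimal Python sketch of a dew server based only on the description above: it serves reads from a local replica (function one) and reconciles deferred writes with the cloud once a connection returns (function two). The CloudStore class and its push/pull API are hypothetical stand-ins, not part of Wang’s system.

```python
# Minimal sketch of a dew server, per Wang's two functions:
# (1) serve the client from a local copy, (2) synchronize with the cloud.
# CloudStore and its push/pull API are hypothetical stand-ins.

class CloudStore:
    """Stand-in for the remote cloud database."""
    def __init__(self):
        self.data = {}
        self.online = True

    def push(self, key, value):
        if not self.online:
            raise ConnectionError("cloud unreachable")
        self.data[key] = value

    def pull(self):
        if not self.online:
            raise ConnectionError("cloud unreachable")
        return dict(self.data)


class DewServer:
    """Local buffer between the user and the cloud."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.local = {}    # local replica, always readable
        self.pending = []  # writes made while offline

    def read(self, key):
        # Function 1: serve the client locally, online or offline.
        return self.local.get(key)

    def write(self, key, value):
        self.local[key] = value
        try:
            self.cloud.push(key, value)
        except ConnectionError:
            self.pending.append((key, value))  # defer until reconnect

    def synchronize(self):
        # Function 2: reconcile local and cloud state once connected.
        for key, value in self.pending:
            self.cloud.push(key, value)
        self.pending.clear()
        self.local.update(self.cloud.pull())


cloud = CloudStore()
dew = DewServer(cloud)
cloud.online = False
dew.write("report.txt", "draft v2")  # works offline
print(dew.read("report.txt"))        # served from the local replica
cloud.online = True
dew.synchronize()                    # deferred write reaches the cloud
```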
Here is the logic of the new memory-management technique. In a 128-core chip, it would require only one-third as much memory as its directory-based predecessor. With Intel’s 72-core high-performance chip, the space saving expands to 80 percent, and with a 1,000-core chip, to 96 percent. The point is that when multiple cores hold copies of the same data and merely read it, there is no problem; inconsistency appears only when one of the cores wants to update the shared data. In a directory-based system, the chip looks up which cores are working on that data and sends them messages invalidating their locally stored copies of it. Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper, states, “Directories guarantee that when a write happens, no stale copies of the data exist. After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order.”
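The new scheme, Tardis, instead orders operations in logical time. The Python sketch below illustrates the timestamp idea as the article describes it; the Line and Core structures and the fixed lease length are illustrative assumptions, not MIT’s implementation. Each cache line carries a write timestamp and a read lease, and a writer, rather than broadcasting invalidations, simply jumps its logical clock past the lease so that the write is ordered after all leased reads.

```python
# Sketch of the logical-timestamp idea behind Tardis (illustrative only).
# A shared line carries a write timestamp (wts) and a read-lease end (rts)
# in *logical* time. A writer never invalidates readers; it jumps its own
# logical clock past the lease, ordering the write after all leased reads.

LEASE = 10  # length of a read lease in logical-time units (assumed)

class Line:
    def __init__(self, value):
        self.value = value
        self.wts = 0   # logical time of the last write
        self.rts = 0   # logical time up to which reads are leased

class Core:
    def __init__(self, name):
        self.name = name
        self.clock = 0  # per-core logical clock, not physical time

    def read(self, line):
        # A read reserves the line up to clock + LEASE in logical time.
        self.clock = max(self.clock, line.wts)
        line.rts = max(line.rts, self.clock + LEASE)
        return line.value

    def write(self, line, value):
        # Instead of waiting or broadcasting invalidations, jump the
        # logical clock past every outstanding lease: the write is then
        # ordered after all leased reads, as Yu describes.
        self.clock = max(self.clock, line.rts + 1)
        line.wts = line.rts = self.clock
        line.value = value

x = Line(0)
a, b = Core("A"), Core("B")
a.read(x)                # A leases x up to logical time 10
b.write(x, 42)           # B jumps to logical time 11; no messages sent
print(b.clock, x.value)  # 11 42
```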
Discussion Overview
Besides saving space in memory, Tardis also removes the requirement to broadcast invalidation messages to all the cores that are sharing a data item. In massively multicore chips, Yu states, this too could lead to performance improvements. “We didn’t see performance gains from that in these experiments, but that may depend on the benchmarks,” Yu said, referring to the industry-standard programs on which he and his co-author Srinivas Devadas tested Tardis. “They’re highly optimized, so maybe they already removed this bottleneck,” Yu states.
“There have been other people who have looked at this sort of lease idea,” said Christopher Hughes, a principal engineer at Intel Labs, “but at least to my knowledge, they tend to use physical time. You would give a lease to somebody and say, ‘OK, yes, you can use this data for, say, 100 cycles, and I guarantee that nobody else is going to touch it in that amount of time.’ But then you’re kind of capping your performance, because if somebody else immediately afterwards wants to change the data, then they’ve got to wait 100 cycles before they can do so. Whereas here, you can just advance the clock. That is something that, to my knowledge, has never been done before. That’s the key idea that’s really neat.” Hughes said, however, that chip designers are conservative by nature. “Almost all mass-produced commercial systems are based on directory-based protocols,” he said. “We don’t mess with them because it’s so easy to make a mistake when changing the implementation.” But “part of the advantage of their scheme is that it is conceptually somewhat simpler than current schemes,” he added.
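Hughes’s distinction can be summarized in a few lines of Python. This is an illustrative sketch, not Intel’s or MIT’s code: with a physical-time lease the writer must stall until the lease expires, while with a logical-time lease it simply advances its clock and proceeds.

```python
# Contrast of physical-time vs logical-time leases (illustrative sketch).

def write_physical(now, lease_end):
    """Writer under a physical-time lease: stall until it expires."""
    wait = max(0, lease_end - now)    # e.g. up to 100 cycles of dead time
    return now + wait                 # physical time when the write proceeds

def write_logical(clock, lease_end):
    """Writer under a logical-time lease: jump the clock, no stall."""
    return max(clock, lease_end + 1)  # instantaneous in physical time

print(write_physical(now=5, lease_end=100))   # 100 -> waited 95 cycles
print(write_logical(clock=5, lease_end=100))  # 101 -> no waiting at all
```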