A Proposed Learning Model Based on Fog Computing Technology
Mohamed Saied M. ElSayed Amer, Lecturer, Canadian International College
In recent years, the internet has become a primary means of accessing learning resources, allowing users to engage with educational environments and with software applications built for learning purposes. Distance-learning applications have therefore been widely adopted to improve the quality of learning. In this context, approaches based on Fog Computing have been used to store and process the learning resources generated by these environments and to bring them closer to end users. Relying on the Cloud alone can introduce latency that is intolerable for learning applications; the Fog Computing paradigm emerged as a solution to the delays that occur when a large number of users access an application simultaneously. Moreover, the performance and availability requirements of learning platforms need to be clearly defined in approaches that aim to reduce latency and response delay. This article therefore presents an experimental architecture model based on Fog Computing as an extension of Cloud services, created to facilitate the management of learning resources. The model uses JSON formatting to exchange resources among Fog Nodes and to carry out learning processes in a distributed way close to end users. The results reported in this paper show that using Fog Computing in a learning environment brings learning resources closer to the end user and yields high performance.
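The paper states that learning resources are exchanged among Fog Nodes in JSON. As a minimal sketch of what such an exchange could look like, the snippet below serializes and deserializes a hypothetical resource record; all field and function names are illustrative assumptions, not taken from the paper.

```python
import json

# Hypothetical learning-resource record a fog node might cache and
# forward toward end users; the field names are illustrative only.
resource = {
    "resource_id": "lesson-042",
    "content_type": "video/mp4",
    "origin": "cloud",
    "cached_at_fog_node": "fog-node-3",
    "size_kb": 10240,
}

def serialize(record):
    """Encode a resource record as a JSON string for node-to-node exchange."""
    return json.dumps(record, sort_keys=True)

def deserialize(payload):
    """Decode a JSON payload received from a peer fog node."""
    return json.loads(payload)

payload = serialize(resource)
assert deserialize(payload) == resource  # round-trip preserves the record
```

A real deployment would wrap such payloads in a transport protocol (HTTP, MQTT, etc.); this only illustrates the JSON formatting step the abstract mentions.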
fog computing, fog nodes, latency, learning model, learning performance.
Implementation of MiniMIPS Data Path using SystemVerilog
Darshan Vaghani and Sachin Gajjar, Department of Electronics and Communication Engineering, Nirma University, Ahmedabad, India
This paper discusses the implementation of the MiniMIPS data path using SystemVerilog. SystemVerilog is a very useful platform for designing and verifying digital designs: concepts such as interfaces, assertions, coverage, classes, and dynamic arrays are valuable in both design and verification, and coverage in particular makes it possible to check the efficiency of the testbench. Designing the MiniMIPS data path requires its basic architecture and instruction set. The ALU of the data path was designed and verified first, using the Quartus II software; the remaining modules of the data path were then designed and connected using an interface. All supported instructions were verified using the Cadence Xcelium simulator.
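Verification of a data path like this typically compares the hardware against a behavioral golden model. As a hedged sketch (the paper's actual instruction set and opcodes are not given here), the following is a minimal Python reference model of a 32-bit ALU of the kind a testbench might check a MiniMIPS ALU against; the mnemonics and width are assumptions.

```python
MASK = 0xFFFFFFFF  # assumed 32-bit datapath width

def alu(op, a, b):
    """Behavioral ALU reference model; op mnemonics are illustrative.

    Results wrap modulo 2**32, matching unsigned hardware arithmetic.
    """
    if op == "add":
        return (a + b) & MASK
    if op == "sub":
        return (a - b) & MASK
    if op == "and":
        return a & b
    if op == "or":
        return a | b
    if op == "slt":  # set-on-less-than; unsigned compare for simplicity
        return 1 if a < b else 0
    raise ValueError(f"unsupported op: {op}")

# Wrap-around behaves like 32-bit hardware:
assert alu("add", 0xFFFFFFFF, 1) == 0
assert alu("sub", 0, 1) == 0xFFFFFFFF
```

A SystemVerilog testbench would drive the same stimulus into the RTL and assert that the DUT output matches such a model cycle by cycle.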
Interface, Queues, MiniMIPS single-cycle data path.
VLSI Physical Design using Open-Source Tools
Kartik Jain and Pratik Navadiya, Department of Electronics and Communication Engineering, Institute of Technology Nirma University, Ahmedabad, India
Nowadays the ASIC design flow is a very mature process in silicon turnkey design; ASIC refers to a semiconductor solution designed for a specific application. A related technology is the FPGA (field-programmable gate array), which can be reprogrammed multiple times to perform different functions, and an ASIC (application-specific integrated circuit) is sometimes also referred to as an SoC (system on chip). To carry out a successful ASIC design, the engineer must follow a proven ASIC design flow based on a good understanding of its specification, low-power design, performance, and requirements. Every stage of the ASIC design cycle has an EDA tool that helps implement and run that stage of the flow. VLSI physical design is the process of transforming a circuit into a physical layout that describes the position of cells and the routes for the interconnects between them. Its stages are chip partitioning, synthesis, floorplanning, power planning, placement, clock tree synthesis (CTS), signal routing, and physical verification. In this paper we use two different open-source EDA tools. The first is the Proton EDA tool, a fully open-source place-and-route tool that uses Iverilog and Yosys-Qflow for the synthesis step, GrayWolf for the placement stage, and Qrouter for the routing stage. The second is the Qflow manager tool, in which we run all the stages of physical design and generate a report for each stage. Taking a full adder as the example design, we run all the stages in both EDA tools and compare them.
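The full adder used as the example design in both flows has a simple, well-known behavior, which the synthesized netlist would ultimately have to match. As a sketch, a behavioral model of the one-bit full adder can be checked exhaustively against its arithmetic definition:

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive check of all 8 input combinations against a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert a + b + cin == (cout << 1) | s
```

Logical equivalence checking in the physical-design flow verifies essentially this property between the RTL and the placed-and-routed netlist.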
Proton EDA tool, Qflow manager tool, ASIC design flow, VLSI physical design.
Bulk-Driven Logarithmic Amplifier for Ultra-Low-Voltage Low-Power Biomedical Applications
Dipesh Panchal and Amisha Naik, Department of Electronics & Communication Engineering, Nirma University, Gujarat
In sensor interface systems, the analog front end, which includes an amplifier, filters, and a data converter, typically dissipates the most power. This paper proposes a novel approach to an ultra-low-voltage, ultra-low-power logarithmic amplifier (LA) using the non-conventional bulk-driven technique. The amplifier block of the analog front end is used as a logarithmic amplifier based on the progressive-compression parallel-summation architecture, with DC offset cancellation achieved by adding an off-chip coupling capacitor at each stage. The core cell is a differential amplifier with a current-bias load, used for symmetric output. The circuit operates from a 0.25 V supply and dissipates 5 nW. The simulated input dynamic range is about 46.27 dB, covering input amplitudes from 0.1 mV to 1 V; the -3 dB bandwidth of the amplifier extends from 100 Hz to 1 kHz, and the simulated total input-referred noise is 4 μV at 1 kHz, as obtained with Cadence Virtuoso.
Bulk Driven, Gate Driven, Biomedical Applications, Variable Gain Amplifier, Programmable Gain Amplifier.
Security Assessment Rating Framework for Enterprises using MITRE ATT&CK® Matrix
Akash Srivastava, Bhavya Bansal, Chetan Verma, Hardik Manocha and Ratan Gupta, India
Threats targeting cyberspace are becoming more prominent and intelligent day by day, which inherently creates a dire demand for continuous security validation and testing. In this paper, we aim to provide a holistic and precise security analysis rating framework for organizations that increases the overall coherency of the outcomes of such testing. The scorecard is based on a security assessment performed following the MITRE ATT&CK matrix, a globally accessible knowledge base of adversary tactics and techniques. The scorecard for an evaluation is generated by ingesting the security testing results into our framework, which provides the organization's overall risk assessment rating as well as the risk related to each tactic in the ATT&CK matrix.
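The abstract describes ingesting per-technique test results and rolling them up into per-tactic risk and an overall rating. The paper's actual scoring formula is not given here; the sketch below assumes a simple aggregation (mean pass rate per tactic, tactics weighted equally) purely to illustrate the shape of such a pipeline.

```python
# Hypothetical ingested results: for each ATT&CK tactic, a list of
# per-technique outcomes (1 = the control blocked the technique,
# 0 = the technique succeeded). Tactic names follow ATT&CK naming.
results = {
    "initial-access": [1, 0, 1],
    "execution": [0, 0, 1],
    "exfiltration": [1, 1],
}

def tactic_scores(results):
    """Mean pass rate per tactic, in [0, 1]."""
    return {tactic: sum(r) / len(r) for tactic, r in results.items()}

def overall_rating(results):
    """Equal-weight average of the tactic scores (an assumed weighting)."""
    scores = tactic_scores(results)
    return sum(scores.values()) / len(scores)
```

A production scorecard would likely weight tactics by business impact rather than equally; the equal weighting here is only a placeholder assumption.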
SOC, Cyber-security awareness, Cyber-security threats, Scorecard, MITRE ATT&CK.
AI, Machine Learning and Deep Learning Development and Applications
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only large but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Given the rapid growth of such data, solutions need to be studied and provided to handle these datasets and extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention; deep learning, the application of advanced neural-network techniques to big data, extends this capability further. This paper aims to analyse several machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
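The idea that a system can "learn from data" with minimal human intervention can be made concrete with the smallest possible example: fitting a one-parameter model by gradient descent. The data and learning rate below are invented for illustration only.

```python
# Minimal "learning from data": fit y = w * x by gradient descent
# on mean squared error, in pure Python (illustrative toy data).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0           # initial parameter
lr = 0.01         # learning rate (assumed)
for _ in range(500):
    # d/dw of mean squared error (w*x - y)**2 over the dataset
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# w converges toward the least-squares solution sum(x*y)/sum(x*x).
```

Deep learning applies the same principle at scale: many parameters, layered models, and large datasets, with the gradient computed by backpropagation.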
Artificial Intelligence, Machine Learning, Deep Learning.
Current Flaws in Deep Learning: An Analysis
Bhavi Dave¹ and Tejas Shyam², ¹Department of Computer Engineering, Dwarkadas J. Sanghvi College, Mumbai, India, ²The Education Journey, Mumbai, India
While deep learning algorithms have markedly improved the paradigm of Artificial Intelligence across domains such as Natural Language Processing and computer vision, their performance comes with certain critical, potentially fatal flaws. This paper explains and analyses five areas of concern in neural networks and their design: the lack of necessary data, a lack of interpretability, software concerns during implementation, questions of biological plausibility, and the inability to encode knowledge. Challenges are flagged by citing and critiquing actual use cases. Finally, the paper makes a threefold recommendation: integrating traditional algorithms and explicit background knowledge into newer methods, creating a hybrid design that amalgamates supervised and unsupervised components, and standardising data collection across domains. The approaches suggested herein will make deep learning more sustainable and impactful by reducing computational resource requirements, making systems more biologically plausible, and mitigating human bias.
Deep Learning, Computer Vision, Natural Language Processing, Model interpretability, Knowledge encoding, Biological plausibility, Software development challenges.