2021 IEEE North American School of Information Theory
Poster Abstracts
Revisiting Complexity and the Bias-Variance Tradeoff Raaz Dwivedi (University of California, Berkeley, USA)
Abstract: The recent success of high-dimensional models, such as deep neural networks (DNNs), has led many to question the validity of the bias-variance tradeoff principle in high dimensions. We reexamine it with respect to two key choices: the model class and the complexity measure. We argue that failing to suitably specify either one can falsely suggest that the tradeoff does not hold. This observation motivates us to seek a valid complexity measure, defined with respect to a reasonably good class of models. Building on Rissanen's principle of minimum description length (MDL), we propose a novel MDL-based complexity (MDL-COMP). We focus on the context of linear models, which have recently been used as a stylized tractable approximation to DNNs in high dimensions. MDL-COMP is defined via an optimality criterion over the encodings induced by a good ridge estimator class. We derive closed-form expressions for MDL-COMP and show that for a dataset with \(n\) observations and \(d\) parameters it is \emph{not always} equal to \(d/n\), and is a function of the singular values of the design matrix and the signal-to-noise ratio. For random Gaussian design, we find that while MDL-COMP scales linearly with \(d\) in low dimensions (\(d < n\)), for high dimensions (\(d > n\)) the scaling is exponentially smaller, growing as \(\log d\). We hope that such a slow growth of complexity in high dimensions can help shed light on the good generalization performance of several well-tuned high-dimensional models. Moreover, via an array of simulations and real-data experiments, we show that a data-driven PracMDL-COMP can inform hyperparameter tuning for ridge regression in limited-data settings, sometimes improving upon cross-validation.
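The abstract does not spell out the PracMDL-COMP criterion, but its spirit, selecting the ridge penalty by minimizing a description-length objective rather than by cross-validation, can be sketched minimally. The two-part codelength below (a residual-plus-penalty fit cost and a half-log-determinant model cost) is an illustrative stand-in and not the exact quantity from the work; the noise level `sigma2` and the penalty grid are assumptions.

```python
import numpy as np

def ridge_codelength(X, y, lam, sigma2=1.0):
    """Two-part description length for ridge with penalty lam (sketch)."""
    n, d = X.shape
    # Ridge estimate: (X^T X + lam I)^{-1} X^T y
    A = X.T @ X + lam * np.eye(d)
    beta = np.linalg.solve(A, X.T @ y)
    resid = y - X @ beta
    # Data-fit cost: residual term plus the ridge penalty term
    fit = (resid @ resid + lam * beta @ beta) / (2 * sigma2)
    # Model cost: half the log-determinant of (I + X^T X / lam)
    _, logdet = np.linalg.slogdet(np.eye(d) + X.T @ X / lam)
    return fit + 0.5 * logdet

def tune_ridge(X, y, lams):
    """Pick the penalty minimizing the description length over a grid."""
    scores = [ridge_codelength(X, y, lam) for lam in lams]
    return lams[int(np.argmin(scores))]
```

In practice one would compare the selected penalty (and downstream test error) against a cross-validated choice, as the abstract describes.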
Distributed Privacy-Preserving Machine Learning Irem Ergun (University of California, Riverside, USA)
Abstract: We currently live in the information era, where information about almost any entity and person is stored digitally on some cloud platform. This availability of data, coupled with recent technological advancements such as GPUs, distributed computing infrastructures, and other high-performance computing gadgets, has led to the rise of distributed machine learning. Currently, distributed machine learning algorithms are run for all kinds of tasks on different types of datasets, many of which may contain sensitive information, such as purchase history or healthcare records. Thus, preserving the privacy of sensitive information is a pressing concern for machine learning practitioners.
Unfortunately, achieving privacy usually comes at a cost: possible degradation in the accuracy of the algorithm and/or a decline in efficiency. One of the main goals of privacy-preserving machine learning is to find the optimal point in this three-way tradeoff between privacy, accuracy, and efficiency, so that algorithms preserve the privacy of individuals while offering good utility in an efficient manner. The first step in dealing with this vast task is to describe the setting in which the training happens and the adversarial behavior that we want protection from, and to specify an aspect of the problem to focus on. The learning environment can be centralized or decentralized, depending on whether a central authority is present to coordinate the training. The privacy guarantee offered can be either cryptographic (security against computationally bounded adversaries) or information-theoretic (security against computationally unbounded adversaries).
We will focus on two different aspects of privacy-preserving distributed learning (under information-theoretic privacy guarantees): 1) accelerating training in a centralized learning environment by distributing the computation load effectively across multiple computing agents, and 2) achieving fully decentralized collaborative learning. The algorithms that are developed to solve these different problems share the use of some common techniques, such as Lagrange coding, Shamir secret sharing, and data quantization. Throughout the presentation, we will give an overview of these techniques, describe the algorithms, go over the results, and give some potential future directions.
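As a minimal illustration of one of the common techniques mentioned above, the following is a standard sketch of Shamir secret sharing over a prime field: any t shares reconstruct the secret, while fewer reveal nothing.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime defining the finite field

def share_secret(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it.
    The shares are evaluations of a random degree-(t-1) polynomial
    whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In the distributed-learning setting of the abstract, such shares would carry model updates or data fragments so that no small coalition of agents learns the underlying values.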
Concentrated Stopping Set Design for Coded Merkle Tree: Improving Security Against Data Availability Attacks in Blockchain Systems Debarnab Mitra (University of California, Los Angeles, USA)
Abstract: In certain blockchain systems, light nodes are clients that download only a small portion of the block. Light nodes are vulnerable to data availability (DA) attacks where a malicious node hides an invalid portion of the block from the light nodes. Recently, a technique based on erasure codes called Coded Merkle Tree (CMT) was proposed by Yu et al. that enables light nodes to detect a DA attack with high probability. The CMT is constructed using LDPC codes for fast decoding but can fail to detect a DA attack if a malicious node hides a small stopping set of the code. To combat this, Yu et al. used well-studied techniques to design random LDPC codes with high minimum stopping set size. Although effective, these codes are not necessarily optimal for this application. In this work, we demonstrate a more specialized LDPC code design to improve the security against DA attacks. We achieve this goal by providing a deterministic LDPC code construction that focuses on concentrating stopping sets on a small group of variable nodes rather than only eliminating stopping sets. We design these codes by modifying the Progressive Edge Growth (PEG) algorithm into a technique called the entropy-constrained PEG (EC-PEG) algorithm. This new method demonstrates a higher probability of detecting DA attacks and allows for good codes at short lengths.
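For readers unfamiliar with PEG, the sketch below shows a simplified plain Progressive Edge Growth construction (ties are broken by check-node degree, and the exact distance-maximizing step of full PEG is abbreviated). The entropy-constrained selection rule of EC-PEG described in the abstract is not implemented here.

```python
from collections import deque

def peg_construct(n_var, n_chk, var_degree):
    """Build a Tanner graph edge set via a simplified Progressive Edge
    Growth: each new edge prefers a check node not yet reachable from
    the variable node, breaking ties by lowest current check degree."""
    chk_nbrs = [set() for _ in range(n_chk)]
    var_nbrs = [set() for _ in range(n_var)]

    def reachable_checks(v):
        """BFS from variable v over the current graph; return the set
        of check nodes reachable from v (including its neighbors)."""
        seen = set(var_nbrs[v])
        frontier = deque(seen)
        seen_vars = {v}
        while frontier:
            c = frontier.popleft()
            for u in chk_nbrs[c]:
                if u not in seen_vars:
                    seen_vars.add(u)
                    for c2 in var_nbrs[u]:
                        if c2 not in seen:
                            seen.add(c2)
                            frontier.append(c2)
        return seen

    for v in range(n_var):
        for _ in range(var_degree):
            reached = reachable_checks(v)
            # Prefer unreachable checks (grows the local girth); if all
            # are reachable, fall back to any non-neighbor check.
            candidates = [c for c in range(n_chk) if c not in reached] or \
                         [c for c in range(n_chk) if c not in var_nbrs[v]]
            c = min(candidates, key=lambda c: len(chk_nbrs[c]))
            var_nbrs[v].add(c)
            chk_nbrs[c].add(v)
    return var_nbrs
```

EC-PEG would replace the candidate-selection rule so that the stopping sets of the resulting code concentrate on a small group of variable nodes, which is the key idea of the abstract.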
Single-Shot Compression for Hypothesis Testing Fabrizio Carpi (New York University, USA)
Abstract: Enhanced processing power in the cloud allows constrained devices to offload costly computations: for instance, complex data analytics tasks can be computed by remote servers.
Remote execution calls for a new communication paradigm that optimizes performance on the analytics task within a rate constraint, instead of the traditional ratedistortion paradigm.
In this paper, we consider a simple binary hypothesis testing scenario where the transmitter performs fixed-length single-shot compression on data sampled from one of two distributions; the receiver performs a hypothesis test on multiple received samples to determine the correct source distribution.
To this end, we formulate the task-aware compression problem as finding the optimal source coder that maximizes the asymptotic performance of the hypothesis test on the receiver side under a rate constraint.
We propose a new source coding strategy based on a greedy optimization procedure and show that the proposed compression scheme outperforms a universal fixed-length single-shot coding scheme for a range of rate constraints.
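The greedy procedure itself is not detailed in the abstract; the following toy sketch conveys one plausible version, reassigning source symbols among \(M\) quantization cells one at a time to maximize the KL divergence between the two induced output distributions (a natural asymptotic-performance surrogate in the Stein regime). The initialization and the choice of objective are assumptions.

```python
import math

def quantized_kl(P, Q, cells, M):
    """KL divergence between the output distributions induced by
    mapping source symbol x to cell cells[x]."""
    Pq, Qq = [0.0] * M, [0.0] * M
    for x, c in enumerate(cells):
        Pq[c] += P[x]
        Qq[c] += Q[x]
    if any(p > 0 and q == 0 for p, q in zip(Pq, Qq)):
        return float('inf')
    return sum(p * math.log(p / q) for p, q in zip(Pq, Qq) if p > 0)

def greedy_quantizer(P, Q, M, n_rounds=20):
    """Coordinate-wise greedy: move one symbol at a time to the cell
    that most increases the divergence, until no move helps."""
    cells = [x % M for x in range(len(P))]  # arbitrary initial map
    for _ in range(n_rounds):
        improved = False
        for x in range(len(P)):
            best = max(range(M), key=lambda c: quantized_kl(
                P, Q, cells[:x] + [c] + cells[x + 1:], M))
            if quantized_kl(P, Q, cells[:x] + [best] + cells[x + 1:], M) > \
               quantized_kl(P, Q, cells, M):
                cells[x] = best
                improved = True
        if not improved:
            break
    return cells
```

Since moves are only accepted when they strictly increase the divergence, the returned quantizer is never worse than the initial assignment.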
Private Linear Transformation: The Single Server Case Nahid Esmati (Texas A&M University, USA)
Abstract: In this poster, we introduce the problem of single-server Private Linear Transformation, or PLT for short. In PLT, a user wishes to compute a number of linear combinations of a subset of data items belonging to a dataset stored on a remote single server. The objective is to minimize the total amount of information that the user needs to download from the server in order to perform the computation, while keeping the identities of the data items required for the computation private. The PLT problem generalizes the Private Information Retrieval (PIR) and Private Linear Computation (PLC) problems, which have recently received significant attention from the research community. We will discuss our converse results and achievability schemes for PLT under two different notions of privacy, called joint privacy and individual privacy, which were previously considered for PIR and PLC.
Superstring-Based Sequence Obfuscation to Thwart Pattern Matching Attacks Bo Guan (University of Massachusetts Amherst, USA)
Abstract: User privacy can be compromised by matching user data traces to records of their previous behavior. The matching of the statistical characteristics of traces to prior user behavior has been widely studied. However, an adversary can also identify a user deterministically by searching data traces for a pattern that is unique to that user. Our goal is to thwart such an adversary by applying small artificial distortions to data traces such that each potentially identifying pattern is shared by a large number of users. Importantly, in contrast to statistical approaches, we develop data-independent algorithms that require no assumptions on the model by which the traces are generated. By relating the problem to a set of combinatorial questions on sequence construction, we are able to provide provable guarantees for our proposed constructions. We also introduce data-dependent approaches for the same problem. The algorithms are evaluated on synthetic data traces and on the Reality Mining Dataset to demonstrate their utility.
Coded Sparse Matrix Computation Schemes That Leverage Partial Stragglers Anindya Bijoy Das (Iowa State University, USA)
Abstract: Coded matrix computation utilizes concepts from erasure coding to mitigate the effect of slow worker nodes (stragglers) in the distributed setting. While this is useful, there are issues with applying, e.g., MDS codes in a straightforward manner for this problem. Several practical scenarios involve sparse matrices. MDS codes typically require dense linear combinations of submatrices of the original matrices which destroy their inherent sparsity; this leads to significantly higher worker computation times. Moreover, treating slow nodes as erasures ignores the potentially useful partial computations performed by them.
In this work we present schemes that allow us to leverage partial computation by stragglers while imposing constraints on the level of coding that is required in generating the encoded submatrices. This significantly reduces the worker computation time as compared to previous approaches and results in improved numerical stability in the decoding process. Exhaustive numerical experiments support our findings.
On Graph Matching Using Generalized Seed Side-Information Mahshad Shariatnasab (North Dakota State University, USA)
Abstract: Graph matching is a fundamental problem which finds applications in social network de-anonymization, image processing, and natural language processing, among others. In this work, graph matching for stochastically generated graphs in the presence of generalized seed side-information is considered. Given a pair of randomly generated correlated Erdős-Rényi graphs, the objective is to recover the labels of the vertices of the second (anonymized) graph using the labels of the vertices of the first (de-anonymized) graph. Furthermore, it is assumed that the matching strategy has access to a collection of shortlists, called ambiguity sets, of possible labels for the vertices of the second graph. This scenario can be viewed as a generalization of the seeded graph matching problem, where the ambiguity sets take a specific form such that the exact labels for a subset of vertices in the second graph are known prior to matching. We provide theoretical guarantees in the form of necessary and sufficient conditions on the graph and shortlist statistics under which matching is possible.
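The work above provides information-theoretic conditions rather than an algorithm, but the role of ambiguity sets can be illustrated with a brute-force toy matcher: it searches only over label maps consistent with each vertex's shortlist and scores edge agreements. This is purely illustrative and is not the matching strategy analyzed in the work; it is only feasible for tiny graphs.

```python
from itertools import permutations

def match_with_shortlists(A1, A2, shortlists):
    """Among label maps consistent with each vertex's ambiguity set,
    return the one maximizing edge agreements between the graphs.
    A1, A2 are adjacency matrices; shortlists[v] is the set of
    candidate labels (vertices of graph 1) for vertex v of graph 2."""
    n = len(A1)
    best, best_score = None, -1
    for perm in permutations(range(n)):
        # Skip maps violating an ambiguity set
        if any(perm[v] not in shortlists[v] for v in range(n)):
            continue
        score = sum(1 for i in range(n) for j in range(i + 1, n)
                    if A2[i][j] == A1[perm[i]][perm[j]])
        if score > best_score:
            best, best_score = perm, score
    return best
```

Shrinking the shortlists prunes the search dramatically, which mirrors the abstract's point that richer seed side-information makes matching easier.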
ByzShield: An Efficient and Robust System for Distributed Training Konstantinos Konstantinidis (Iowa State University, USA)
Abstract: Training of large-scale models on distributed clusters is a critical component of the machine learning pipeline. However, this training can easily be made to fail if some workers behave in an adversarial (Byzantine) fashion whereby they return arbitrary results to the parameter server (PS). A plethora of existing papers consider a variety of attack models and propose robust aggregation and/or computational redundancy to alleviate the effects of these attacks. In this work we consider an omniscient attack model where the adversary has full knowledge of the gradient computation assignments of the workers and can choose to attack (up to) any q out of K worker nodes to induce maximal damage. Our redundancy-based method ByzShield leverages the properties of bipartite expander graphs for the assignment of tasks to workers; this helps to effectively mitigate the effect of the Byzantine behavior. Specifically, we demonstrate an upper bound on the worst-case fraction of corrupted gradients based on the eigenvalues of our constructions, which are based on mutually orthogonal Latin squares and Ramanujan graphs. Our numerical experiments indicate over a 36% reduction on average in the fraction of corrupted gradients compared to the state of the art. Likewise, our experiments on training followed by image classification on the CIFAR-10 dataset show that ByzShield has on average a 20% advantage in accuracy under the most sophisticated attacks. ByzShield also tolerates a much larger fraction of adversarial nodes compared to prior work.
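The Latin-square ingredient mentioned above can be illustrated with the classical construction of mutually orthogonal Latin squares (MOLS) of prime order \(p\), namely \(L_a(i,j) = (ai + j) \bmod p\) for \(a = 1, \dots, p-1\). ByzShield's actual task-assignment scheme built on these squares is not reproduced here.

```python
def mols(p):
    """All p-1 mutually orthogonal Latin squares of prime order p,
    via the classical construction L_a[i][j] = (a*i + j) mod p."""
    return [[[(a * i + j) % p for j in range(p)] for i in range(p)]
            for a in range(1, p)]

def are_orthogonal(L1, L2):
    """Two Latin squares are orthogonal iff superimposing them
    yields every ordered pair of symbols exactly once."""
    p = len(L1)
    pairs = {(L1[i][j], L2[i][j]) for i in range(p) for j in range(p)}
    return len(pairs) == p * p
```

Orthogonality is what spreads any fixed set of attacked workers across many distinct task groups, which underlies the corrupted-gradient bound in the abstract.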
DoF Region for Joint Imaging & Uplink Communication Nishant Mehrotra (Rice University, USA)
Abstract: In this work, we establish the fundamental limits of joint imaging and uplink communication. We consider a coherent dual-function base station that illuminates a scene while also decoding uplink data from a communication user. To determine the resolution and rate combinations that can be supported simultaneously by the base station, we derive outer and inner bounds on the joint imaging-communication degrees-of-freedom region. We show that the outer and inner regions match in certain special regimes.
Attributed Graph Alignment Ning Zhang (University of British Columbia, Canada)
Abstract: Motivated by various data science applications, including de-anonymizing user identities in social networks, we consider the graph alignment problem, where the goal is to identify the vertex/user correspondence between two correlated graphs. Existing work mostly recovers the correspondence by exploiting the user-user connections. However, in many real-world applications, additional information about the users, such as user profiles, might be publicly available. In this paper, we focus on aligning attributed graphs, where additional user information, referred to as attributes, is incorporated to assist graph alignment. We establish necessary and sufficient conditions for recovering the vertex correspondence exactly, where the conditions match for a wide range of practical regimes. Our results recover existing tight information-theoretic limits for models where only the user-user connections are available, and further span the full spectrum between these models and models where only attribute information is available.
Variations on the Magic Square Game and Their Communication Class Noah Brüstle (McGill University, Canada)
Abstract: The Magic Square Game is a well-known quantum pseudo-telepathy game; it is, in a certain sense, the smallest game that players can win with access to entanglement but for which no winning strategy exists under local hidden variables.
In this paper, we seek to establish the communication class of some variations on this classic game: forcing the reveal of more than one column or row, and playing the game on a smaller (2x2) board with a different modularity. In the latter case, we further offer a quantum strategy that improves upon the optimal local-hidden-variable strategy.
A General Coded Caching Scheme for Scalar Linear Function Retrieval Yinbin Ma (University of Illinois Chicago, USA)
Abstract: Coded caching aims to minimize the network's peak-time communication load by leveraging the information prestored in the users' local caches. The original single file retrieval setting of Maddah-Ali and Niesen has recently been extended to general Scalar Linear Function Retrieval (SLFR) by Wan et al., who proposed a linear scheme that surprisingly achieves the same optimal load (under the constraint of uncoded cache placement) as single file retrieval. This paper's goal is to characterize the conditions under which a general SLFR linear scheme is optimal and to gain practical insights into why the specific choices made by Wan et al. work. This paper shows that the optimal decoding coefficients are necessarily the product of two terms, one involving only the encoding coefficients and the other only the demands. In addition, the relationships among the encoding coefficients are shown to be captured by the cycles of a certain graph. Thus, a general linear scheme for SLFR can be found by solving a spanning tree problem.
Role of Shared Key for Secure Communication over 2-User Gaussian Z-Interference Channel Somalatha U (Indian Institute of Technology Tirupati, India)
Abstract: In this work, the role of a shared key in enhancing secrecy performance is explored when users operate in an interference-limited environment. When users have access to a shared key, the secrecy performance of the system depends not only on the channel parameters but also on how the secret key is used in the encoding process. To address this problem, a 2-user Gaussian Z-interference channel with a shared key is considered. The paper proposes novel achievable schemes which differ from each other in how the key is used in the encoding process. The first scheme uses a combination of message splitting, superposition coding, one-time pad, and stochastic encoding. The main novelty of the scheme is that the receiver experiencing interference can cancel part of the interference without violating the secrecy constraint. The second scheme uses the shared key to encrypt the message, and the last scheme uses the key as part of the stochastic encoding. To get further insights into the role of the shared key in enabling secure communication, the secure generalized degrees of freedom (SGDOF) region for the different schemes is characterized for the weak/moderate interference regime, and optimality is established for some specific cases. The developed results show how to scale the key rate with the underlying channel parameters to improve the performance of the system. This work has been submitted to a conference for possible publication, and an extension of this work is to be submitted to a journal.
Database Matching Under Column Deletions Serhat Bakirtas (New York University, USA)
Abstract: De-anonymizing user identities by matching various forms of user data available on the internet raises privacy concerns. A fundamental understanding of the privacy leakage in such scenarios requires a careful study of the conditions under which correlated databases can be matched. Motivated by synchronization errors in time-indexed databases, in this work, the matching of random databases under random column deletions is investigated. Adapting tools from information theory, in particular ones developed for the deletion channel, conditions for database matching in the absence and presence of deletion location information are derived, showing that partial deletion information significantly increases the achievable database growth rate for successful matching. Furthermore, given a batch of correctly matched rows, a deletion detection algorithm that provides partial deletion information is proposed, and a lower bound on the algorithm's deletion detection probability in terms of the column size and the batch size is derived. The relationship between the database size and the batch size required to guarantee a given deletion detection probability using the proposed algorithm suggests that a batch size growing double-logarithmically with the row size is sufficient for a nonzero detection probability guarantee.
Channel Estimation for Distributed Intelligent Reflecting Surfaces Assisted Multi-User MISO Systems Hibatallah Alwazani (University of British Columbia, Canada)
Abstract: Intelligent reflecting surface (IRS)-assisted wireless communication promises improved system performance, while posing new challenges in channel estimation (CE) due to the passive nature of the reflecting elements. Although a few CE protocols for IRS-assisted multiple-input single-output (MISO) systems have appeared, they either require long channel training times or are developed under channel sparsity assumptions. Moreover, existing works focus on a single IRS, whereas in practice multiple such surfaces should be installed to truly benefit from the concept of reconfiguring propagation environments. In light of these challenges, this paper tackles the CE problem for the distributed IRSs-assisted multi-user MISO system. An optimal CE protocol requiring relatively low training overhead is developed using Bayesian techniques under the practical assumption that the BS-IRSs channels are dominated by the line-of-sight (LoS) components. An optimal solution for the phase shift vectors required at all IRSs during CE is determined, and the minimum mean square error (MMSE) estimates of the BS-users direct channels and the IRSs-users channels are derived. Simulation results corroborate the normalized MSE (NMSE) analysis and establish the advantage of the proposed protocol over a benchmark scheme in terms of training overhead.
Finite Block Length Achievable Rates for Interference Limited Scenarios Shraddha Sahoo (Indian Institute of Technology, Tirupati, India)
Abstract: Ultra-Reliable Low-Latency Communication (URLLC) is one of the modes offered by 5G for scenarios where high reliability must be ensured under stringent delay constraints. The landmark channel coding result of Shannon shows that the error probability can be made very small as long as the rate is less than the channel capacity. However, this requires codewords of long length, which can increase the latency of the communication. In many applications where delay is an important factor, codewords of short length must be used, which can degrade the error performance. This raises a fundamental question of how to trade off rate, error, and blocklength in a communication system.
In recent years, progress has been made in determining the interplay between rate, error, and blocklength for a point-to-point system, and the work of Polyanskiy et al. has invigorated research in this direction over the last decade. The impact of interference on system performance under finite-blocklength coding is not well understood in the existing literature. As the performance of most wireless systems is limited by interference rather than by noise, it is important to understand the performance of various schemes in the finite-blocklength regime. To address this problem, this paper considers a 2-user Gaussian interference channel, and the performance of schemes such as treating interference as noise, time sharing, and interference cancellation is studied for various interference regimes. The goodput of the various schemes is characterized, giving useful insights into system performance.
Role of Costs in Commitment over Noisy Channels Anuj Yadav and Manideep Mamindlapally (Indian Institute of Technology Patna and Indian Institute of Technology Kharagpur, India)
Abstract: Commitment is a widely used cryptographic primitive which finds application in practical problems like contract signing, sealed-bid auctions, secure multiparty computation, etc. A commitment protocol involves two mutually distrustful parties who interact over two phases, viz., a commit phase followed by a reveal phase, and aim to satisfy three security guarantees: soundness, concealment, and bindingness.
We study information-theoretically secure commitment over general discrete memoryless channels (DMCs) under the requirement that channel uses incur costs, and characterize the optimal throughput of DMCs under fairly general cost functions. This work significantly generalizes prior work on commitment capacity over unconstrained DMCs. Apart from a primal commitment capacity characterization, we also present a dual characterization. An interesting consequence of our dual characterization is that the output distribution optimizing the commitment capacity is unique even though there may exist multiple optimizing input distributions. We crucially develop our dual characterization to present the commitment capacity of an input-constrained binary symmetric channel.
(A paper with coauthors Amitalok J. Budkuley and Manoj Mishra was recently accepted at ISIT 2021 conference)
Finite Block Length Information Theory and Its Importance in Mission Critical Applications Deeraj Kumar (Indian Institute of Technology Tirupati, India)
Abstract: 5G technologies will have a significant impact on many applications ranging from industrial control to health care. For such applications, it is required to ensure high reliability under stringent latency constraints. 5G technology is envisioned to support machine-to-machine (M2M) communication rather than communication between human-operated mobile terminals. It is highly important to maintain low latency in mission-critical applications. For such scenarios, it is not possible to use long packets as they would increase the latency. Many of the existing communication techniques are not suitable for such applications, as they were originally designed for long packets. According to Shannon's channel coding theorem, to ensure a very low probability of error, the rate of transmission should be less than the channel capacity. To make this true, codewords of large length need to be used, and this in turn increases latency. Some of the recent results in finite-blocklength information theory have given new insights on how to trade off rate, error, and blocklength. This work discusses some of the recent advances in finite-blocklength information theory and its applications in the context of mission-critical applications. This will be more like a tutorial-style presentation which discusses the recent progress in finite-blocklength information theory, its relevance in M2M communications, and some research problems in these directions.
Binary Decomposition and Coding Schemes for the Peak-Constrained IM/DD Channel Sarah Bahanshal (University of British Columbia, Canada)
Abstract: This work develops a binary decomposition for the peak-constrained intensity-modulation with direct-detection (IM/DD) channel. It then uses this decomposition, which we call a Vector Binary Channel (VBC), to propose simple binary coding schemes. The VBC consists of N bit-pipes and, different from the Avestimehr-Diggavi-Tse linear deterministic model, captures interactions between bit-pipes and thus preserves capacity. Two practical coding schemes are then designed for the VBC. The first is an Independent Coding scheme which encodes independently and decodes successively while ignoring the dependence between the noises on different bit-pipes. The second improves upon the first by exploiting the knowledge of the noises on previously decoded bit-pipes as a state for the current decoding operation. The achievable rates of the schemes are compared numerically with capacity and shown to perform well, with the second scheme approaching capacity for signal-to-noise ratio (SNR) > 2.5 dB. The schemes can be implemented using practical capacity-achieving codes for the binary symmetric channel (BSC), such as polar codes. Methods to improve the schemes at low SNR are discussed towards the end of the paper.
A Control-Theoretic Linear Coding Scheme for the Fading Gaussian Broadcast Channel with Feedback Siyao Li (University of Illinois at Chicago, USA)
Abstract: This poster proposes a linear coding scheme for the two-user fading additive white Gaussian noise broadcast channel, under the assumptions that: (i) perfect Channel State Information (CSI) is available at the receivers; (ii) unit-delayed CSI along with channel output feedback (COF) is available at the transmitter. The proposed scheme is derived from a control-theoretic perspective that generalizes the communication scheme of Liu et al. for the point-to-point (P2P) fading Gaussian channel under the same assumptions. The proposed scheme asymptotically achieves the rates of a posterior matching scheme by the same authors for a certain choice of parameters.
Vision-Based Real-Time Seam Tracking and Defect Detection in GMAW Fillet Welding Process Using Neural Networks Mobina Mobaraki (University of British Columbia, Canada)
Abstract: Goal: There has been a tremendous surge of interest in the robotic welding industry in improving weld quality through real-time seam tracking, defect detection, and parameter optimization. To this end, many vision-based methods have shown promising results, but a solution with a satisfactory tradeoff between accuracy, robustness, and speed remains an open question.
Method: Analyzing images during the welding process is a challenging task due to the high amount of spatter, heat distortion, illumination changes, changes in welding parameters, and mirror reflection. Therefore, conventional computer vision methods, which are mainly based on grey-level segmentation, would not be robust and accurate enough for the proposed goal. As a result, powerful machine learning algorithms, including neural networks such as CNNs, RNNs, LSTMs, and predictive models, might be appropriate options to efficiently analyze the huge amount of data.
In this work, we aim to use LSTM or PredNet to predict the weld pool morphology in future frames as it contains valuable information about the defects and seam centerline. Having the geometry of the weld pool in the upcoming frames and external information from welding parameters, another neural network like classNet could be implemented to predict the defects and find the deviation between the wire and seam centerline.
Challenges: Although neural networks have the potential to help achieve a high-quality weld, there are some challenges to face. One of them is labeling the training data, as real-time detection of the shape of the deformable weld pool in the presence of disturbances is not an easy task. In this regard, an unsupervised method could be applied. The accuracy of the weld pool boundary, seam deviation, and defect detection can be evaluated by manually annotating a few frames of the weld pool boundary or by using interferometry.
Another challenge would be defining the loss function. Based on the idea of active contours, the energy function is defined to find the boundaries which are smooth, have the highest gradient level and are the most similar points to the initial predefined shape of the weld pool.
Novelties: Besides dealing with the challenges, there will be some innovative ideas to make the results more accurate. One idea is to select images during the background current, where there is less light spatter. Another is to use multi-sensor technology and add information from a microphone or a thermometer to our existing neural network. Our ultimate goal is to develop and implement a proper deep learning system involving the mentioned novelties to achieve the mentioned goals.
Universal Graph Compression: Stochastic Block Models Ziao Wang (The University of British Columbia, Canada)
Abstract: Motivated by the prevalent data science applications of processing large-scale graph data, such as social networks, web graphs, and biological networks, as well as the high I/O and communication costs of storing and transmitting such data, this work investigates universal compression of data appearing in the form of a labeled graph. In particular, we consider a widely used random graph model, the stochastic block model (SBM), which captures the clustering effects in social networks. A universal graph compressor is proposed, which achieves the optimal compression rate for a wide family of SBMs with edge probabilities ranging from \(O(1)\) down to \(\Omega(1/n^{2-\epsilon})\) for any \(0 < \epsilon < 1\).
Existing universal compression techniques are developed mostly for stationary ergodic onedimensional sequences with entropy linear in the number of variables. However, the adjacency matrix of SBM has complex twodimensional correlations and sublinear entropy in the sparse regime. These challenges are alleviated through a carefully designed transform that converts twodimensional correlated data into almost i.i.d. blocks. The blocks are then compressed by a KrichevskyTrofimov compressor, whose length analysis is generalized to arbitrarily correlated processes with identical marginals.
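As a small illustration of the Krichevsky–Trofimov compressor applied to the resulting near-i.i.d. blocks, here is a minimal sketch of the binary KT sequential probability assignment (function name ours, not from the poster):

```python
def kt_probability(bits):
    """Krichevsky-Trofimov probability of a binary sequence.

    Each symbol is predicted with an add-1/2 (Jeffreys prior)
    estimate based on the counts seen so far; the product of these
    predictions is the sequence probability assigned by the compressor.
    """
    prob, ones = 1.0, 0
    for t, b in enumerate(bits):
        p_one = (ones + 0.5) / (t + 1)  # add-1/2 count estimate
        prob *= p_one if b == 1 else (1 - p_one)
        ones += b
    return prob
```

The ideal code length for a block is then `-log2(kt_probability(block))` bits; the poster's contribution generalizes the length analysis of this assignment beyond i.i.d. sources.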
Kramers–Kronig-Based Optical OFDM for Band-Limited Visible Light Communications Ruowen Bai (McMaster University, Canada)
Abstract: Visible light communications (VLC) have attracted increasing recent interest thanks to the availability of hundreds of terahertz of license-free spectrum. Data are modulated onto the optical intensity of an illumination LED and are detected by a photodiode (PD). All emissions must be non-negative, constrained in their mean, and band-limited due to the inherent low-pass nature of illumination LEDs.
In this work, we propose a novel approach to optical OFDM for these channels, KK-OOFDM, based on the Kramers–Kronig relations, which is inherently non-negative and strictly band-limited. Unlike earlier intensity-modulated OFDM approaches, our KK-OOFDM has no out-of-band clipping noise, nor does it require the addition of an explicit DC bias. The information rate of KK-OOFDM is computed numerically and estimated analytically, and is further contrasted against existing capacity bounds derived for discrete-time AWGN channels. Results for the BER of uncoded KK-OOFDM are presented and shown to provide a gain of 1 dB over conventional approaches.
Visualization of Digital Twin Data Jovana Plavsic (University of British Columbia, Canada)
Abstract: Real-time information systems are the core of Industry 4.0. They are being utilized for a variety of purposes, for example, remote collaboration, data analysis, decision-making, monitoring, maintenance, and fault detection. Digital twins are an example of such a robust system with a real-time function. The information which flows through the digital twin must be presented effectively to the end-user. Visualization is the most widely employed technique that ensures the information is quickly grasped and interpreted unambiguously. Visualization for digital twin technology differs from traditional visualization methods due to the increased complexity and performance requirements of the system. Thus, information theorists and visualization designers must devise new technical and design solutions. We present an overview of challenges for digital twin design from the visualization aspect. These include, but are not limited to, a large amount of data, performance and optimization, reliability and accuracy, privacy and security, information flow, implementation cost, and workflow standardization. Identification of prominent challenges for visualization for digital twins is the first step to overcoming the technical and design limitations and establishing collaboration between all involved stakeholders. Even though a comprehensive list is beyond the scope of this poster, we believe this overview will motivate the advancement of visualization techniques for digital twins.
Swarm Intelligence for Decentralized Decision Making in Mining Matias Pinto (University of British Columbia, Canada)
Abstract: Nowadays, most AI systems are based on closed systems with fixed rules and static databases. Real life, by contrast, is open-ended, where no data perfectly reflects the ever-changing world (Marcus and Davis, 2019). Disruptive AI technologies for the extractive industries are not free from these limitations. While it is possible to find AI solutions in machine learning applications that support mineral exploration, or in the unceasing expansion of autonomous truck systems at remote operations, these still rely on closed-ended systems with fixed rules where the random component of the data is minimized. Since complex industrial systems are exposed to critical failures that may compromise human life, sustainability, and economic progress, they demand the reliable and efficient development of technologies able to work under harsh conditions and applicable to scenarios of constant uncertainty.
This research aims to study the applicability of swarm intelligence to support smart and sustainable decision-making across industrial operations. These algorithms take advantage of the collective behaviors of decentralized actors (equipment) to create a self-organized system of operations, where the data intercorrelated between actors support the multivariable optimization of production, safety, and sustainability KPIs. At the same time, this research complements swarm intelligence with computer vision and machine learning algorithms, which form the basis for the detection and prediction of industrial accidents, environmental disruptions, and operational failures. These algorithms work by automatically retrieving trained parameters from multiple sources of unstructured data (images), generating a collective learning system that embraces the stochastic component of an open-ended system.
Finally, the algorithms will be tested in a real application in the mining industry, where the ingestion of unstructured and intercorrelated data from interconnected equipment will allow the incorporation of sustainability KPIs related to safety and to the reduction of carbon-based consumables such as fuel and tires. This application aims to demonstrate a smarter way of optimizing industrial operations based on sustainable and productive factors, made feasible by the computing capacity of federated learning. In addition, adapting collective behaviors to an industrial process will avoid centralizing training over a static closed system, ensuring a more comprehensive and smarter decision-making process.
Permutation Recovery by Linear Decoding: Optimality and Asymptotics Minoh Jeong (University of Minnesota, USA)
Abstract: The problem of recovering data permutations from noisy observations is becoming a common task in modern communication and computing systems. We investigate the following question on noisy data permutation recovery: given a noisy observation of a permuted data set, according to which permutation was the original data sorted? Our focus is on scenarios where the data is generated according to a given distribution and the noise is additive Gaussian with zero mean and a given covariance matrix. We pose this problem within a hypothesis testing framework, and our objective is twofold. First, we characterize sufficient conditions on the noise covariance matrix that ensure that the optimal decision criterion (that is, the criterion that minimizes the probability of error) has a complexity that is at most polynomial in the data size. Towards this end, we focus on the linear regime, in which the optimal decision criterion consists of computing a permutation-independent linear function of the noisy observation followed by a sorting operation. We find necessary and sufficient conditions for the optimality of such a linear regime. Second, we characterize a general expression for the probability of error and study its rate of convergence in the linear regime for some practically relevant asymptotic regimes, such as when the data size grows to infinity, and in the high- and low-noise regimes.
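The linear regime described above can be sketched in a few lines: apply a fixed, permutation-independent linear map to the noisy observation and then sort. In this toy decoder the identity map stands in for the linear function (names and the default map are ours, for illustration only):

```python
import numpy as np

def recover_permutation(y, w=None):
    """Linear-regime decoder: a permutation-independent linear map
    (identity by default; `w` is an illustrative placeholder)
    followed by a sorting operation."""
    s = y if w is None else w @ y
    return np.argsort(s)
```

With moderate noise relative to the gaps between data values, sorting the mapped observation recovers the same ordering as the noiseless case.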
Scheduling and Relaying for Half-Duplex Wireless Networks: Efficiency and Optimality Sarthak Jain (University of Minnesota, Twin Cities, USA)
Abstract: We consider Gaussian n-relay half-duplex wireless networks, where the source communicates with the destination with the help of n half-duplex relays. Computing the Shannon capacity, or a provably optimal approximation of it, is notoriously difficult, and it requires an optimization over the 2^n possible listen/transmit configuration states of the n relays. Our main goal is to propose suitable solutions (such as transmission schemes) that are both provably optimal and of low complexity. Towards this end, we have considered two types of network simplification: (i) simple scheduling and (ii) topology selection. In simple scheduling, the focus is on designing relaying and scheduling protocols that use only a subset of the network states and achieve rates close to the Shannon capacity of the network. Specifically, for a Gaussian half-duplex n-relay network without a direct link from the source to the destination, we present sufficient conditions under which an energy-efficient schedule consisting of at most n+1 states (out of the 2^n possible states) is optimal and the approximate capacity (namely, a constant-gap approximation of the Shannon capacity, where the gap depends only on n) can be computed efficiently, that is, in polynomial time in n. Moreover, for the n=2 relay case, we completely characterize the operation of the network by providing an optimal schedule, a relaying scheme, and a closed-form expression for the approximate capacity under different network conditions. In topology selection, the focus is on characterizing bounds on the fraction of the capacity of the entire network that can be guaranteed by operating only a subset of the relays in the network.
Towards this end, for a Gaussian half-duplex network, where the source communicates with the destination by hopping information through a layer of n non-interconnected relays, we present fundamental tight guarantees on the fraction of the approximate capacity of the entire network that can be achieved by a single relay.
A Perturbative Channel Model for Optical Fiber Reza Rafie Borujeny (University of Toronto, Canada)
Abstract: Transmission of signals over an optical fiber described by the nonlinear Schrödinger equation is considered. Perturbation methods are used to simplify this nonlinear equation. The resulting channel model can be used to find accurate noise statistics in the so-called pseudo-linear regime of operation.
Dynamic Network Slicing and VNF Embedding in QoSAware Softwarized 5G Networks Deborsi Basu (Indian Institute of Technology, Kharagpur & IEEE Student Member, India)
Abstract: The inclusion of Network Function Virtualization (NFV) and Software-Defined Networking in next-generation communication networks (e.g., 5G and beyond) influences Telecom Service Providers (TSPs) to deploy Virtual Network Functions (VNFs) to improve network performance without incurring additional usage of network resources. The softwarization and virtualization of network resources uplift the Network Slicing (NS) concept to optimally place VNF instances over Service Function Chains (SFCs) for superior service delivery. Limited network capacity and storage are a major hindrance to better Quality-of-Service (QoS), so optimal VNF embedding over SFCs becomes obligatory. In this work, we study the VNF embedding problem (VNF-EP) over SFC instances for softwarized 5G networks. We propose a novel approach for dynamic VNF sharing over multiple SFCs that considers the flow requests of the SFCs of each individual network slice. Inter-slice coordination is performed by considering the common service requests among independent and heterogeneous slices. The mathematical formulation follows a MILP-based optimization approach that optimally controls VNF sharing to increase network efficiency and hardware utilization. This flexible and shareable VNF embedding approach (FlexShareVNF) yields significantly more energy-efficient service delivery in low-latency environments, and our performance evaluation supports this claim. The approach will be extremely helpful for smooth network upgrades in resource-restricted environments for future network advancements.
An Information Bottleneck Problem with Rényi's Entropy Jian-Jia Weng (Queen's University, Canada)
Abstract: In the past decade, the optimization of information measures such as entropy, cross-entropy, and mutual information has been widely and successfully adopted in machine learning algorithms and transmission systems. In particular, numerous results are related to the so-called information bottleneck (IB) method, whose objective is to extract from observed data the maximal relevant information about a hidden variable subject to a mutual-information complexity constraint. In this poster, we present an IB problem with the objective of obtaining a most informative representation of a hidden feature subject to a Rényi entropy complexity constraint. The optimal bottleneck tradeoff between relevance (measured via Shannon's mutual information) and Rényi entropy cost is defined. We also derive an operational characterization of the optimal tradeoff by demonstrating that the optimal Rényi entropy-relevance tradeoff is achievable by a simple time-sharing scalar coding scheme and that no coding scheme can provide better performance.
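As a small numerical illustration of the Rényi entropy cost appearing in the constraint above (the function name and choice of base-2 logarithm are ours, not from the poster):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha, in bits.

    H_alpha(p) = log2(sum_i p_i^alpha) / (1 - alpha);
    the limit alpha -> 1 recovers Shannon entropy.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop zero-mass symbols (0 * log 0 = 0 convention)
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log2(p)))  # Shannon limit
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))
```

For a uniform distribution the Rényi entropy equals log2 of the alphabet size for every order, while for skewed distributions it is non-increasing in the order alpha.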
On Implementing a Neural Network-Based Costa's Dirty Paper Coding Scheme Zhenyu Zhang (University of British Columbia, Canada)
Abstract: Dirty paper coding (DPC) is a coding scheme that is important for communication over a channel with state known at the transmitter. Such a channel arises in a multiple-input multiple-output (MIMO) broadcast channel (BC) and related scenarios where an interference state is known at the transmitter side. Generally, DPC is based on a random coding argument, which makes it difficult to implement in practice. However, a neural network (NN) may be useful for realizing a DPC-like scheme that achieves good performance in a channel with state. This paper presents initial results investigating the application of an NN to constructing Costa's DPC for a Gaussian channel with an additive Gaussian state. An NN-based DPC scheme is designed and tested, demonstrating a promising interference-suppression capability, particularly when the interference-to-noise ratio (INR) is large compared to the signal-to-noise ratio (SNR).
Cache Updating Strategy Minimizing the Age of Information with Time-Varying File Popularities Haoyue Tang (Tsinghua University, China)
Abstract: We consider updating strategies for a local cache which downloads time-sensitive files from a remote server through a bandwidth-constrained link. The files are requested randomly from the cache by local users according to a popularity distribution which varies over time according to a Markov chain structure. We measure the freshness of the requested time-sensitive files through their Age of Information (AoI). The goal is then to minimize the average AoI of all requested files by appropriately designing the local cache's downloading strategy. To achieve this goal, the original problem is relaxed and cast into a Constrained Markov Decision Problem (CMDP), which we solve using a Lagrangian approach and Linear Programming. Inspired by this solution for the relaxed problem, we propose a practical cache updating strategy that meets all the constraints of the original problem. Under certain assumptions, the practical updating strategy is shown to be optimal for the original problem in the asymptotic regime of a large number of files.
For a finite number of files, we show the gain of our practical updating strategy over the traditional square-root-law strategy (which is optimal for fixed, non-time-varying file popularities) through numerical simulations.
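The square-root-law baseline mentioned above can be sketched for fixed popularities: update rates proportional to the square root of each file's request probability. The 1/(2f) age expression below assumes periodic updates and is only illustrative; all names are ours, not from the poster:

```python
import numpy as np

def sqrt_law_rates(popularity, budget):
    """Square-root-law allocation: update rate of file i proportional
    to sqrt(p_i), normalized to the total bandwidth budget."""
    w = np.sqrt(np.asarray(popularity, dtype=float))
    return budget * w / w.sum()

def avg_aoi(popularity, rates):
    """Popularity-weighted average age: under periodic updates at rate f_i,
    the time-average age of file i is approximately 1 / (2 f_i)."""
    p = np.asarray(popularity, dtype=float)
    return float(np.sum(p / (2.0 * np.asarray(rates, dtype=float))))
```

For a skewed popularity profile, the square-root allocation gives a strictly lower average age than splitting the bandwidth uniformly, which is the fixed-popularity optimality the poster's strategy improves upon when popularities vary over time.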
Active Privacy-Utility Trade-Off Against a Hypothesis Testing Adversary Ecenaz Erdemir (Imperial College London, United Kingdom (Great Britain))
Abstract: We consider a user releasing her data containing some personal information in return for a service. We model the user's personal information as two correlated random variables: one, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true hypotheses, albeit with different statistics. The user manages data release in an online fashion such that the maximum amount of information is revealed about the latent useful variable, while the confidence in the sensitive variable is kept below a predefined level. For the utility, we consider both the probability of correct detection of the useful variable and the mutual information between the useful variable and the released data. We formulate both problems as Markov decision processes and numerically solve them by advantage actor-critic deep reinforcement learning.
The Average Coding Rate of Large-Scale Wireless Networks in the Finite Block-Length Regime Nourhan Hesham (University of British Columbia, Canada)
Abstract: With the recent spread of social media applications involving live streaming, image uploading, video calls, and other activities that require high data rates, high availability, low latency, and high reliability on the uplink (UL) and downlink (DL) channels, future technologies are being pushed toward densifying networks as one solution to the rising demand. However, achieving low-latency constraints requires the use of short codes, which makes analysis based on Shannon's capacity expression imprecise. To remedy this, in this poster, an average coding rate expression is derived for a large-scale network with short codewords and finite constellation sets, using stochastic geometry and the theory of coding in the finite block-length regime. Additionally, simulations are performed to compare the theoretical results with the achievable rates of multilevel polar-coded modulation.
Interactive Inference Under Information Constraints Ziteng Sun (Cornell University, USA)
Abstract: We study the role of interactivity in distributed statistical inference under information constraints, e.g., communication constraints and local differential privacy. We focus on the tasks of goodness-of-fit testing and estimation of discrete distributions. From prior work, these tasks are well understood under non-interactive protocols. Extending these approaches directly to interactive protocols is difficult due to correlations that can build up through interactivity; in fact, gaps can be found in prior claims of tight bounds for distribution estimation using interactive protocols.
We propose a new approach to handle this correlation and develop a unified method for establishing lower bounds for both tasks. As applications, we obtain optimal bounds for both estimation and testing under local differential privacy and communication constraints. We also provide an example of a natural testing problem where interactivity helps.
Principal Bit Analysis: Autoencoding with Schur-Concave Loss Sourbh Bhadane (Cornell University, USA)
Abstract: We consider a linear autoencoder in which the latent variables are quantized, or corrupted by noise, and the constraint is Schur-concave in the set of latent variances. Although finding the optimal encoder-decoder pair for this setup is a non-convex optimization problem, we show that decomposing the source into its principal components is optimal. If the constraint is strictly Schur-concave and the empirical covariance matrix has only simple eigenvalues, then any optimal encoder-decoder must decompose the source in this way. As one application, we consider a strictly Schur-concave constraint that estimates the number of bits needed to represent the latent variables under fixed-rate encoding, a setup that we call Principal Bit Analysis (PBA). This yields a practical, general-purpose, fixed-rate compressor that outperforms existing algorithms. As a second application, we show that a prototypical autoencoder-based variable-rate compressor is guaranteed to decompose the source into its principal components.
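A minimal sketch of the principal-component decomposition that the abstract shows to be optimal for the linear autoencoder (the function name is ours; this is plain PCA on the empirical covariance, not the full PBA compressor):

```python
import numpy as np

def pca_autoencoder(X, k):
    """Linear autoencoder that decomposes the source into its
    top-k principal components.

    Returns (latents, reconstruction) for data matrix X of shape (n, d).
    """
    mean = X.mean(axis=0)
    Xc = X - mean                              # center the source
    cov = Xc.T @ Xc / len(X)                   # empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    V = eigvecs[:, ::-1][:, :k]                # top-k principal directions
    latents = Xc @ V                           # encode
    recon = latents @ V.T + mean               # decode
    return latents, recon
```

With k equal to the source dimension the reconstruction is exact; PBA additionally allocates bits to the resulting latent variances under a strictly Schur-concave rate constraint.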
Cross-Layer Communication over Fading Channels with Adaptive Decision Feedback Borna Sayedana (McGill University, Canada)
Abstract: In this poster, the cross-layer design of transmitting data packets over an AWGN fading channel with adaptive decision feedback is considered. The transmitter decides the number of packets to transmit and the threshold of the decision feedback based on the queue length and the channel state. The transmit power is chosen such that the probability of error is below a pre-specified threshold. We model the system as a Markov decision process and use ideas from lattice theory to establish qualitative properties of optimal transmission strategies. In particular, we show that: (i) if the channel state remains the same and the number of packets in the queue increases, then the optimal policy either transmits more packets or uses a smaller decision-feedback threshold, or both; and (ii) if the number of packets in the queue remains the same and the channel quality deteriorates, then the optimal policy either transmits fewer packets or uses a larger decision-feedback threshold, or both. We also show, under rate constraints, that if the channel gains for all channel states are above a threshold, then the “or” in the above characterization can be replaced by “and”.
Multi-Message Private Information Retrieval Using a Random Input Key Ningze Wang (Texas A&M University, USA)
Abstract: Consider the problem of multi-message private information retrieval (MPIR) from N non-colluding replicated databases. In the MPIR problem, a user aims to retrieve D messages out of K stored messages. We consider two different privacy constraints: joint privacy (MPIR-JP) and individual privacy (MPIR-IP). We propose a new code for the (N, D) = (3, 2) MPIR-JP problem and show that it has an optimal rate for even K and a near-optimal rate for odd K. The results for our MPIR-IP schemes indicate that the relaxed privacy constraint can potentially lower the download cost.
Communication with Adversary Identification in Byzantine Multiple Access Channels Neha Sangwan (Tata Institute of Fundamental Research, India)
Abstract: We introduce the problem of determining the identity of a Byzantine user (internal adversary) in a communication system. We consider a two-user discrete memoryless multiple access channel where either user may deviate from the prescribed behaviour. Owing to the noisy nature of the channel, it may be overly restrictive to attempt to detect all deviations. In our formulation, we only require detecting deviations which impede the decoding of the non-deviating user's message. When neither user deviates, correct decoding is required. When one user deviates, the decoder must either output a pair of messages of which the message of the non-deviating user is correct or identify the deviating user. The users and the receiver do not share any randomness. The results include a characterization of the set of channels where communication is feasible, and inner and outer bounds on the capacity region.
(This work has been accepted to ISIT 2021. A copy is available at https://arxiv.org/pdf/2105.03380v1.pdf)
Spectral Efficiency of Multi-Antenna Index Modulation Bharath Shamasundar (University of Texas at Dallas, USA)
Abstract: Index modulation conveys information through the index of the activated component of a vector signal as well as the value of the activated component. Indexing can occur across antennas, subcarriers, or other degrees of freedom. When index modulation is applied to antennas, it is known as spatial modulation. Earlier bounds and estimates for the spectral efficiency of spatial modulation have been too loose for determining the parameters of coding and modulation, which are important in practice. Furthermore, the best previously available bounds did not effectively elucidate the relationship of spatial modulation capacity to SIMO and MIMO capacity at low and high SNR. The present work develops novel, tighter bounds on the spectral efficiency of spatial modulation. Specifically, for a 4x2 antenna configuration at 8 bits/s/Hz, our bounds are 2 dB tighter than the best bounds available in the literature.
Training Quantum Machine Learning Models with the Aid of Kernels Neel Gandhi (Pandit Deendayal Energy University, India)
Abstract: Near-term quantum devices and fault-tolerant quantum computers have led to developments in the field of quantum machine learning. In recent years, researchers have been working on replacing classical machine learning models with quantum machine learning models using quantum circuits. Moreover, the mathematical structure of quantum models is closely related to kernel-based methods such as support vector machines, where data are analyzed in high-dimensional Hilbert spaces via inner products computed through measurement. The bridge between kernels and quantum machine learning models is built by evaluating a classical kernel on a quantum device, an approach found to provide improved or comparable results with respect to classical machine learning models. In this work, we compare quantum models based on kernel-based training against variational-training-based models, specifically in terms of the number of circuits. Kernel-based training requires substantially fewer quantum circuit evaluations than variational training, and each evaluation circuit is much shorter for kernel-based training than for variational circuit training. The relative efficiency of kernel-based and variational circuits usually depends on the number of parameters used in the variational quantum model. In the current scenario, kernel-based methods could be a better alternative to variational circuits, since they require fewer parameters, on the order of the training-set size. The proposed kernel-based training might be useful for fault-tolerant and near-term quantum models, where the kernel computes distances between the data-encoding quantum states.
Finally, a kernel-based approach to quantum machine learning could help encode data into corresponding quantum states, and can potentially be more effective than classical machine learning models in terms of performance, time, and efficiency.
Orthogonal Time Frequency Space Modulation (OTFS) Surabhi Garudangiri Dayananda (University of Texas at Dallas, USA)
Abstract: Future wireless communication systems are envisioned to support diverse requirements that include high-mobility application scenarios such as high-speed trains and vehicle-to-vehicle and vehicle-to-infrastructure communications. The dynamic nature of wireless channels in such scenarios makes them doubly dispersive. Orthogonal time frequency space (OTFS) modulation is a recent two-dimensional (2D) modulation technique especially suited for doubly dispersive wireless channels. A fundamental feature of OTFS modulation is that the information symbols are multiplexed in the delay-Doppler domain rather than in the time-frequency domain, as done in conventional multicarrier modulation techniques. An advantage of signaling in the delay-Doppler domain is that a channel rapidly varying in time manifests as a slowly varying sparse channel when viewed in the delay-Doppler domain, which simplifies channel estimation in rapidly time-varying wireless channels. The focus of this poster is to present the fundamental ideas behind OTFS modulation, discuss some recent results, and indicate some interesting directions for research in this area.
Privacy-Preserving Coded Mobile Edge Computing for Low-Latency Distributed Inference Reent Schlegel (Simula UiB, Norway)
Abstract: This poster considers a mobile edge computing scenario where multiple devices want to perform a linear inference Wx on some local data x, given a network-side matrix W. It presents a coding scheme that provides information-theoretic privacy against z colluding (honest-but-curious) edge servers while minimizing the overall latency in the presence of straggling servers. The scheme combines Shamir's secret sharing, to yield data privacy and straggler mitigation, with replication, to provide spatial diversity for the download. Furthermore, additional coding on W offers higher resiliency to stragglers.
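A toy sketch of the Shamir-sharing idea for private linear inference, here with z = 1 and real arithmetic for readability (an actual scheme operates over a finite field and handles stragglers via higher-degree interpolation; all names are ours, not from the poster):

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x, alphas):
    """Degree-1 Shamir shares of x: s_j = x + alpha_j * r for random r.
    Any single share reveals nothing useful about x (over a field,
    exactly nothing; here, only x masked by Gaussian noise)."""
    r = rng.standard_normal(x.shape)
    return [x + a * r for a in alphas]

def reconstruct(results, alphas):
    """Each server j returns W @ s_j, a degree-1 polynomial in alpha_j.
    Lagrange interpolation at t = 0 recovers W @ x from two results."""
    a0, a1 = alphas
    l0 = a1 / (a1 - a0)   # Lagrange basis polynomial evaluated at 0
    l1 = a0 / (a0 - a1)
    return l0 * results[0] + l1 * results[1]
```

Because the servers apply the same linear map W to every share, the sharing polynomial commutes with the computation, and the device recovers exactly W @ x without any server seeing x.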
End-to-End Rate Enhancement in C-RAN Using Multi-Pair Two-Way Computation Mahmoud Hasabelnaby (University of British Columbia, Canada)
Abstract: Cloud radio-access networks (C-RANs) have been proposed as an enabling technology for keeping up with the requirements of next-generation wireless networks. Most existing works on C-RAN study the uplink or the downlink separately. However, designing the uplink and the downlink jointly may bring additional advantages, especially if message source-destination information is taken into account. In this poster, this idea is demonstrated by considering pairwise message exchange between users in a C-RAN. A multi-pair two-way transmission scheme is proposed that targets maximizing the end-to-end sum data rate. In the proposed scheme, a multi-pair computation strategy is used, where the baseband processing unit (BBU) pool decodes integer linear combinations of paired users' codewords instead of decoding linear combinations of individual codewords. The BBU pool then precodes the computed signals, compresses them, and forwards them to the remote radio heads (RRHs), which in turn decompress the signals and send them to the users. Finally, each user decodes its desired message using its own message as side information. The achievable rate of this scheme is derived, optimized, and evaluated numerically. Simulation results reveal that significant end-to-end rate improvement can be achieved using the proposed scheme compared to existing schemes.
Improving Diagnosis of Developmental Hip Dysplasia Using Location-Tagged 2D Ultrasound Ammarah Kaderdina (University of British Columbia, Canada)
Abstract: 2D ultrasound (US) probes are currently used to screen for developmental dysplasia of the hip (DDH) in infants. However, recent studies have shown that 3D US volumes enable more repeatable DDH measurements compared to 2D US images. Unfortunately, the 3D US probes used to collect these volumes are expensive and generally not available in pediatric clinics. In this research, I aim to reconstruct 3D US volumes from 2D US image frames. To do this, I first need to location-tag each image frame; that is, given a 3D space, identify where in that space each image frame is located. I examine three location-tagging techniques: deep learning, an inertial sensor, and optical tracking. In this poster, I focus on the deep learning method, in which a neural network is trained to output the translation and rotation between consecutive frames. With the US image sequence and accurate location-tagging information, a 3D US volume can be reconstructed to closely resemble an infant's hip. From this volume, metrics used to diagnose DDH can be extracted and compared with metrics extracted from a ground-truth US volume.
Learning to Reflect and to Beamform for Intelligent Reflecting Surface with Implicit Channel Estimation Tao Jiang (University of Toronto, Canada)
Abstract: An intelligent reflecting surface (IRS), which consists of a large number of tunable reflective elements, is capable of enhancing the wireless propagation environment in a cellular network by intelligently reflecting electromagnetic waves from the base station (BS) toward the users. The optimal tuning of the phase shifters at the IRS is, however, a challenging problem, because, due to the passive nature of the reflective elements, it is difficult to directly measure the channels between the IRS, the BS, and the users. Instead of following the traditional paradigm of first estimating the channels and then optimizing the system parameters, this paper advocates a machine learning approach capable of directly optimizing both the beamformers at the BS and the reflective coefficients at the IRS based on a system objective. This is achieved by using a deep neural network to parameterize the mapping from the received pilots (plus any additional information, such as the user locations) to an optimized system configuration, and by adopting a permutation-invariant/equivariant graph neural network (GNN) architecture to capture the interactions among the different users in the cellular network. Simulation results show that the proposed implicit-channel-estimation-based approach is generalizable and interpretable, and can efficiently learn to maximize a sum-rate or minimum-rate objective from far fewer pilots than traditional approaches based on explicit channel estimation.
Deep Learning for Distributed Channel Feedback and Multiuser Precoding in FDD Massive MIMO Kareem Attiah (University of Toronto, Canada)
Abstract: This work shows that deep neural networks (DNNs) can be used for efficient and distributed channel estimation, quantization, feedback, and downlink multiuser precoding in a frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) system in which a base station (BS) serves multiple mobile users with rate-limited feedback from the users to the BS. A key observation is that the multiuser channel estimation and feedback problem can be thought of as a distributed source coding problem. In contrast to the traditional approach, in which the channel state information (CSI) is estimated and quantized at each user independently, this work shows that a joint design of the pilots and a new DNN architecture, which maps the received pilots directly into feedback bits at the user side and then maps the feedback bits from all the users directly into the precoding matrix at the BS, can significantly improve the overall performance. Numerical results show that the DNN-based approach with short pilot sequences and very limited feedback overhead can already approach the performance of conventional linear precoding schemes with full CSI.
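The two-stage mapping described above (pilots → feedback bits at each user, concatenated bits → precoding matrix at the BS) can be sketched with linear layers and sign quantization standing in for the trained DNNs. All dimensions and weights below are illustrative placeholders, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, B, M = 2, 8, 4, 4  # users, pilot length, feedback bits/user, BS antennas

# hypothetical "trained" weights (random stand-ins here)
W_user = rng.standard_normal((K, L, B))          # one encoder per user
W_bs = rng.standard_normal((K * B, 2 * M * K))   # BS decoder (real + imag parts)

def user_encoder(y, k):
    """User k maps its received pilots directly to B feedback bits via
    sign quantization -- no explicit CSI estimate is ever formed."""
    return np.sign(y @ W_user[k])

def bs_precoder(bits):
    """BS maps the concatenated feedback bits from all users straight to
    a complex M x K precoding matrix, normalized to a unit power budget."""
    z = np.concatenate(bits) @ W_bs
    V = (z[: M * K] + 1j * z[M * K:]).reshape(M, K)
    return V / np.linalg.norm(V)

y = rng.standard_normal((K, L))  # stand-in received pilot observations
V = bs_precoder([user_encoder(y[k], k) for k in range(K)])
print(V.shape)  # (4, 2) complex precoder, one column per user
```

The distributed-source-coding view enters through `bs_precoder`: the BS decodes all users' bits jointly, so correlated channels need not be described redundantly by each user alone.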
Keyless Covert Communication via Channel State Information Hassan ZivariFard (University of Texas at Dallas, USA)
Abstract: We consider the problem of covert communication over a state-dependent channel when the channel state is available either non-causally, causally, or strictly causally, either at the transmitter alone or at both transmitter and receiver. Covert communication with respect to an adversary, called ‘‘warden,’’ is one in which, despite communication over the channel, the warden's observation remains indistinguishable from an output induced by innocent channel-input symbols. Covert communication involves fooling an adversary in part by a proliferation of codebooks; for reliable decoding at the legitimate receiver, the codebook uncertainty is typically removed via a shared secret key that is unavailable to the warden. In contrast to previous work, we do not assume the availability of a shared key at the transmitter and legitimate receiver. Instead, shared randomness is extracted from the channel state in a manner that keeps it secret from the warden, despite the influence of the channel state on the warden's output. When the channel state is available at the transmitter and receiver, we derive the covert capacity region. When the channel state is only available at the transmitter, we derive inner and outer bounds on the covert capacity. We provide examples for which the covert capacity is positive with knowledge of channel state information but is zero without it.
Caching for Secrecy over the Binary Erasure Wiretap Channel Morteza Shoushtari (Brigham Young University, USA)
Abstract: We show that caching can aid in achieving secure communications by considering a wiretap scenario where the transmitter and legitimate receiver share access to a secure cache, and an eavesdropper is able to tap transmissions over a binary erasure wiretap channel during the delivery phase of a caching protocol. The scenario under consideration gives rise to a new channel model for wiretap coding that allows the transmitter to effectively choose a subset of bits to erase at the eavesdropper by caching the bits ahead of time. In the general case, the eavesdropper observes the remaining coded bits through the wiretap channel. In the wiretap type-II scenario, the eavesdropper is able to choose a set of revealed bits only from the subset of bits not cached. We present a coding approach that allows efficient use of the cache to realize a caching gain in the network, and show how to use the cache to optimize the information-theoretic security in the choice of a finite blocklength code and the choice of the cached bit set. To our knowledge, this is the first work on explicit algorithms for secrecy coding in any type of caching network.
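The key channel-model idea above is that cached bits are never transmitted during delivery, so from the eavesdropper's viewpoint they behave as guaranteed erasures on top of the random erasures of the binary erasure channel. A toy simulation of that effect (codeword length, cache size, and erasure probability are arbitrary illustrative values; the cached-set choice here is random, whereas the paper optimizes it):

```python
import numpy as np

rng = np.random.default_rng(2)
n, cache_size, eps = 16, 6, 0.4  # codeword length, cached bits, BEC erasure prob.

codeword = rng.integers(0, 2, n)
cached = rng.choice(n, size=cache_size, replace=False)  # shared ahead of time

# The eavesdropper taps the delivery phase through a BEC(eps); cached
# positions are never sent, so they are erased for the eavesdropper with
# certainty, regardless of the channel realization.
erased = rng.random(n) < eps
eve_view = codeword.astype(float)
eve_view[erased] = np.nan   # random channel erasures
eve_view[cached] = np.nan   # transmitter-chosen "erasures" via the cache

print(int(np.isnan(eve_view).sum()))  # total erasures, always >= cache_size
```

Because the transmitter controls which positions fall in the cached set, it can concentrate these guaranteed erasures where they most degrade the eavesdropper's inference, which is exactly the optimization the abstract describes.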
